Muddu Sudhakar, Aisera | Supercloud22
(upbeat music) >> Welcome back everyone to Supercloud22, I'm John Furrier, host of theCUBE here in Palo Alto. For this next ecosystem segment we have Muddu Sudhakar, who is the co-founder and CEO of Aisera, a friend of theCUBE, a Cube alumni, serial entrepreneur, multiple exits, been on multiple times with great commentary. Muddu, thank you for coming on, and supporting our- >> Always, thank you for having me, John. >> Yeah, thank you. Great handshake there, I love to do it. I wanted you here for two reasons. One is, congratulations on your new funding. >> Thank you. >> For $90 million, Series D funding. >> Series D funding. >> So, huge validation in this market. >> It is. >> You are an experienced software entrepreneur, so it's a real testament to your team. But also, you're kind of in the Supercloud vortex. This new wave that Supercloud is part of, I call it the pretext to what's coming with multi-cloud. It is the next level. >> I see. >> Structural change, and we have been reporting on it, Dave and I, and we are being challenged. So, we decided to open it up. >> Very good, I would love it. >> And have a conversation rather than waiting eight months to prove that we are right. Which, we are right, but that is a long story. >> You're always right. (both laugh) >> What do you think of Supercloud, what's going on? What is the big trend? Because the public cloud is great, so there is no conflict there. >> Right. >> It's got great business, it's integrated, IaaS to SaaS to PaaS, all in the beginning, or the middle. All that is called good. Now you have on-premise, hybrid cloud. >> Right. >> Edge is right around the corner, exploding in new capabilities. So, complexity is still here. >> That's right. I think you nailed it. We talk about hybrid cloud and multi-cloud. Supercloud kind of elevates the message even better. Because you will still have workloads on some public clouds, and some workloads still running on the Edge. 
That's where the Edge cloud comes in. Some will still be on-prem. So, the Supercloud as a concept is beyond hybrid and multi-cloud. To me, I will run some of our workloads on Amazon. Some could be on Azure, some could be running only on the Edge, right? >> Mm hm. >> And we still have what we call remote executors. Think of ServiceNow. They have what they call the MID Server, I think it was called, where you put in a small piece of code and run it. >> Yeah. >> So, I think all those things will be running in on-prem environments, VMware cloud, et cetera. >> And if you look back, I think it has been four or five years now since Andy Jassy at re:Invent announced Outposts. I think that was the moment in time that Dave and I took a pause and said, "Okay, that's Amazon, who listens to their customers, acknowledging hybrid." >> Right. >> Then we saw the rise of the Snowflakes, the Databricks, the specialty clouds. You start to see people who are building on top of AWS. Look at MongoDB: it was a database, now they are a full-blown, large-scale data platform. These companies took advantage of the public cloud to build, as Jerry Chen calls it, "Castles in the cloud." >> Right. >> That seems to be happening in all areas. What do you think about that? >> Right, so what is driving the cloud? To me, it's machine learning and AI, right? Versus the earlier cloud adoption, which we used to call lift and shift. The Outposts and lift and shift were initially there to get the data into the cloud. I think if you see, the vendor that I like the most, and I'm not picking any favorite, is Microsoft Azure. They're thinking like your Supercloud, right? Amazon is other things, but Azure is a lot more, because they run on-prem. They are also out at the cloud front, like Amazon CloudFront. So I think Azure and Amazon are doing a lot more in the area of Supercloud. What is really helping is that the machine learning environment needs Superclouds. 
Because I will be running some on the Edge, some compute, some will be running on the public cloud, some could be running in my data center. So, I think the Supercloud is really suited for AI and automation really well. >> Yeah, it is a good point about Microsoft, too. And I think Microsoft's existing install base saved Azure. >> Okay. >> They brought Office 365, SQL Server, 'cause their customers weren't leaving Microsoft. They had the productivity thing nailed down, as well as the ability to catch up >> That's right. >> to AWS. So, a natural extension to on-premise with Microsoft. >> I think... >> Tell us- >> Your Supercloud is what Microsoft did, right? Azure. If you think of it, they had Office 365, their SharePoint, their Dynamics; taking all of those properties, running them on Azure, and still giving a migration path into the data center. That is Supercloud. So, in the early days, Supercloud came from Azure. >> Well, that's a good point, we will certainly debate that. I will also say that Snowflake built on AWS. >> That's right. >> Okay, and became a super powerhouse with the data business. As did Databricks. >> That's right. >> Then went to Azure. >> That's right. >> So, you're seeing kind of the playbook. >> Right. >> Go fast on cloud native, the native cloud. Get that flywheel going, then get going somewhere else. >> It is, and to that point, I think, you and me are talking, right? If you start at one cloud and go to another cloud, the amount of work for a vendor like us to implement it is enormous. Today, we use all three clouds, including the Gov Cloud. It's a lot of work. So, what will happen with the next toolkit we use? Even services like Elastic. People will, commoditize is not the word, but people will create an abstraction layer, even for S3. >> Explain that, explain that in detail. So, Elastic? What do you mean by that? 
>> Yeah, so what that means is, today, if you do an Elasticsearch on Amazon and then I go to Azure, I don't want to stand up another Elasticsearch layer. Ideally, I want to write to an abstracted search layer, so that when I move my services into a different cloud, I don't have to re-compute and re-calculate everything. That's a lot of work. Particularly once you have a production customer, if I were to shift the workloads, even at the point of infrastructure, take S3: if I write my infrastructure to S3 and tomorrow I go to Azure, Azure will have its own object store. I don't want to re-validate that. So what will happen is, each individual component, Kubernetes is already there, we want storage, we want the network layer, we want VPN services, Elastic, as well as all the fundamental stuff, including MongoDB, should be abstracted to run on the Superclouds. >> Okay, well that is a little bit of a unicorn fantasy. But let's break that down. >> Sure. >> Do you think that's possible? >> It is. Because I think, if I am on MongoDB, I should be able to get a horizontal layer to MongoDB that is optimized for all three of them. I don't want three different MongoDBs. >> First of all, everyone will buy that. >> Sure. >> I'm skeptical that that's possible, given where we are at right now. So, you're saying that a vendor will provide an abstraction layer. >> No, I'm saying that either MongoDB itself will do it, or a third-party layer will come as a service which will abstract all this, so that we will write to an API layer. >> So what are you guys doing? How do you handle multiple clouds? You guys are taking that burden on, because it makes sense. You should build the abstraction layer, not rely on a third-party vendor, right? >> We are doing it because there is no third party available to offer it. But if a third party offers it tomorrow, I will use that as a Supercloud service. >> If they're 100% reliable? >> That's right. That's exactly it. >> They have to do the work. 
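The abstraction layer described above can be sketched in a few lines. This is a hypothetical illustration, not any shipping product: application code targets a single `ObjectStore` interface, and per-cloud adapters (here, a stand-in in-memory one) would wrap S3, Azure Blob Storage, and so on. All class and function names are invented for the sketch.

```python
# Sketch of a cross-cloud storage abstraction: callers write to one
# interface; swapping clouds means swapping adapters, not rewriting apps.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """The abstracted layer the application writes to."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in adapter; real adapters would wrap boto3 or the Azure SDK."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def migrate(src: ObjectStore, dst: ObjectStore, keys) -> None:
    """Moving to another cloud: copy through the abstraction, no re-validation
    of a cloud-specific API by the application."""
    for k in keys:
        dst.put(k, src.get(k))


aws_like = InMemoryStore()    # imagine: S3 adapter
azure_like = InMemoryStore()  # imagine: Azure Blob adapter
aws_like.put("doc1", b"hello")
migrate(aws_like, azure_like, ["doc1"])
print(azure_like.get("doc1"))  # b'hello'
```

The design choice mirrors the conversation: the application never learns which cloud it is on, which is exactly what makes the "move without re-computing everything" claim plausible.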
>> They have to do the work, because today I am doing it only because no one else is offering it- >> Okay, so what people might not know is that you are an angel investor as well as an entrepreneur. You've been very successful, so you're rich, you have a lot of money. If I were a startup and I said, "Muddu, I want to build this abstraction layer," what would be the funding advice that you would give me as an entrepreneur, as a company, to do that? >> I would do it like the Apigee that Google acquired. You should create an Apigee-like layer for infrastructure services up front. I think that is a very good option. >> And you think that is viable? >> It is very much viable. >> Would that be part of the Supercloud architecture, in your opinion? >> It is, right? And that will abstract all the clouds to some level. It is like the Kubernetes abstraction: if I am running on Kubernetes, I can transfer to any cloud. >> Yeah. >> But that should go from compute into the other infrastructure. >> It seems to me, Muddu, and I want to get your thoughts about this whole Supercloud de facto standard opportunity. It feels like we are waiting for a moment where there is some sort of de facto unification, whether it is in the abstraction layer or a standards body. There is no W3C here going on. I mean, the W3C was the web consortium, for the World Wide Web. The Supercloud seems to be having the same impact the web had: transformative, disruptive, re-factoring business operations. Is there a standards body or an opportunity for a de facto standard? Kubernetes was a great example of a unification around something for orchestration. Is there a better version in the Supercloud model where we need a standard? >> Yes and no. The reason is, by the time you come to a standard, it takes time. Look at what happened. First, we started with VMs, then it became Docker and containers, then we came to Kubernetes. So it goes through a journey. 
I think the next few years will be spent on Supercloud: let's make customers happy, let's get enough services going, and then the standards will come. Standards will come almost two to three years later. So I don't think standards should happen right now. Right now, all we need is enough startups to create the super-layer abstraction, with the goal in mind of AI and automation. The reason for AI is that AI needs to be able to run there. Automation, because for running a workflow, I can either run the workflow on cloud services, I can run it on-prem, I can run it on a database. So you have two good applications: take AI and automation with Supercloud, make enough noise on that, make enough applications, and then the standards will come. >> On this project, at Supercloud22 this past day, we have heard a lot of people talking. The theme is that developers are okay, they are doing great. Open source is booming. >> Yes. >> Cloud native's got major traction. Developers are going fast and they love it, shifting left, all these great things. They're putting a lot of pressure on the data, DevOps, and security teams; they're the ones who are leveling up. We are hearing a lot of conversations around how they can be faster. What is your view on this, relative to that Supercloud nirvana, getting there? How are DevOps and security teams leveling up to the devs? >> A couple of things. Think of the world of DevSecOps and security ops. Security is important, right, given what is going on, but you don't need to do security the manual way. That whole new operation that you and me talked about, AIOps, should happen. AIOps is for service operations, for performance, for incidents, or for security. Nobody thinks of AI for security. So, the DevOps people should think more about the world of AIOps, so that I can predict and prevent things before they happen. Then the security will be much better. So AIOps with Supercloud will probably be that nirvana. 
But that is what should happen. >> On the AI side of things, what you guys are doing, what are you learning, on scale, relative to data? You said machine learning needs data, it needs scale, operations. What's your view on the automation piece of all this? >> I think, to me, the data is the single, underrated, unsung kind of hero in the whole of machine learning. Everyone talks about AI and machine learning algorithms. Algorithms are as important, but even more important is data. Lacking data, I can't do algorithms. So my advice to customers is, don't lose your data. That is why you see Frank, my old boss, setting everything up in the data cloud, in Snowflake. Data is so important: store the data, analyze the data. Data is the new AI. You and me talk so many times- >> Yeah. >> It's underrated; people are not appreciating how important it is. But the data is coming from logs, events, knowledge documents, any data in any form. Keep the data, analyze the data, find the data patterns, and then things like Supercloud can really take advantage of that. >> So, in the Supercloud equation, one of the things that has come up is that the native clouds do great. Their IaaS-to-SaaS interactions solve a lot of problems. The integration is good. >> Right. >> Now when you go off cloud, you get regions, you get latency issues- >> Right. >> You have more complexity. So what's the trade-off in the Supercloud journey, if you had to guess? And just thinking out loud here, what would be some of the architectural trade-offs of how you do it? What's the sequence? What's the order of operations to get Superclouding going? >> Yeah, very good questions here. I think once you start going from the public cloud, where the clouds have scale, to, let's say, even a regional data center, onto the Edge, latency will kick in. The lack of compute will kick in. So there, I think everything should become asynchronous, right? 
You will run the application in a limited environment. You should anticipate small memory, small compute, long latencies, but processing should still happen. So some operations should become old-school store-and-forward, like email. I send an email, it's an asynchronous thing, and I await a response. I think most message passing should go back to the old-school architectures. They should become asynchronous, where things can be reliable. I think, as long as algorithms can take that into account at the Edge, Superclouds can really bridge between the public cloud and the Edge. >> Muddu, thanks for coming on, we really appreciate your insights here. You've always been a great friend, great commentator. If you weren't the CEO and a famous angel investor, we would certainly love to have you as a theCUBE analyst, here on theCUBE. >> I am always available for you. (John laughs) >> When you retire, you can come back. Final point, we've got time left. We'll give you a chance to talk about the company. I'm really intrigued by the success of your ninety-million-dollar financing round, because we are in a climate where people aren't getting those kinds of investments. It's usually down-rounds. >> Okay. >> 409A adjustments, people are struggling. You got an up-round and you got a big number. Why the success? What is going on with the company? Why are you guys getting such great validation? Goldman Sachs, Thoma Bravo, Zoom, these are big names, these are the next-gen winners. >> It is. >> Why are they picking you? Why are they investing in you? >> I think it is not one thing, it is many things. First of all, it is a four-year journey for us to where we are right now. The company started in late 2017. It is getting the right customers, partners, employees, team members. A lot of hard work went in, so a lot of thanks to the Aisera community for where we are. Why customers, and why are we where we are? Look, fundamentally there is a problem to solve. 
Like, what Aisera is trying to solve is: can we automate customer service? Whether for internal employees or external customer support. Do it for IT, HR, sales, marketing, all the way to ops. Like you talk about DevSecOps, I don't want thousands of manual tune-ups for ops. If I can make that job better, >> Yeah >> I want to. Any job, I want to automate. I call it elevating the human, right? >> Yeah. >> And that's the reason- >> 'Cause you're saying people have to learn specialty tools, and there are consequences to that. >> Right, and to me, people should focus on more important tasks and use AI as a tool to automate those things, right? Think of it as offering Apple Siri or Alexa as a service; that is how we are trying to offer customer service, right? And it can do that consistently and reduce costs. Cost is a big reason why customers like us a lot. We have eliminated cost, and in this down economy, that will amplify our message even more, right? I am going to take a bite out of their expense, whether it is tool expense or resources. Second is user productivity. And finally, experience. People want experience. >> Final question, folks out there, first of all, what do you think about Supercloud? And if someone asks you, what is this Supercloud thing? How would you answer? >> Supercloud, to me, is beyond multi-cloud and hybrid cloud. It is the bridge: applications that are built on the Supercloud can run on all clouds seamlessly. You don't need to recompile them or rewrite them. Supercloud is one place to build, develop, and deploy. >> Great, Muddu. Thank you for coming on. Supercloud22 here, breaking it down with the ecosystem commentary. We have a lot of people coming in, a small group of experts in our network, bringing you open conversation around the future of cloud computing and applications globally. And again, it is all about the next-generation cloud. This is theCUBE, thanks for watching. (upbeat music)
Ameesh Divatia, Baffle | AWS re:Inforce 2022
(upbeat music) >> Okay, welcome back everyone to live coverage here on theCUBE, Boston, Massachusetts, for AWS re:Inforce 22, the security conference for Amazon Web Services. Obviously re:Invent, at the end of the year, is the big celebration; "re:Mars" is the new show that we've covered as well. The "re:" shows are here with theCUBE. I'm John Furrier, host, with a great guest, Ameesh Divatia, co-founder and CEO of a company called "Baffle." Ameesh, thanks for joining us on theCUBE today, congratulations. >> Thank you. It's good to be here. >> And we got the custom encrypted socks. >> Yup, limited edition. >> 64-bit or 128. >> Base64 encoding. >> Okay. (chuckles) >> Secret message in there. >> Okay. (chuckles) Secret message. (chuckles) We'll have to put a little meme on the internet, figure it out. Well, thanks for comin' on. You guys are goin' hot right now. You guys are a hot startup, and you're in an area that's going to explode, we believe. >> Yeah. >> The Supercloud is here. We've been covering that on theCUBE: people are building on top of the Amazon hyperscalers, and without the capex, they're building platforms. The application tsunami has come and is still coming, it's not stopping. Modern applications are faster, they're better, and they're driving a lot of change under the covers. >> Absolutely. Yeah. >> And you're seeing structural change happening in real time, in ops, the network. You guys got something going on in the encryption area. >> Yes. >> Data. Talk about what you guys do. >> Yeah. So we believe very strongly that the next frontier in security is data. We've had multiple waves in security. The next one is data, because data is really where the threats will persist. If the data shows up in the wrong place, you get into a lot of trouble with compliance. So we believe in protecting the data all the way down at the field, or record, level. That's what we do. >> And you guys do all kinds of encryption, or other things? >> Yes. 
So we do data transformation, which encompasses three different things. It can be tokenization, which is format-preserving; we do real encryption with counter mode; or we can do masked views. So tokenization, encryption, and masking, all with the same platform. >> So pretty wide-ranging capabilities with respect to having that kind of safety. >> Yes. Because it all depends on how the data is used down the road. Data is created all the time. Data flows through pipelines all the time. You want to make sure that you protect the data, but don't lose the utility of the data. That's where we provide all that flexibility. >> So Kurt was on stage today in one of the keynotes. He's the VP of the platform at AWS. >> Yes. >> He was talking about encrypt everything. He said we need to rethink encryption. Okay, good job. We like that. But then he said, "We have encryption at rest." >> Yes. >> That's kind of been there, done that. >> Yes. >> And in-flight? >> Yeah. That's been there. >> But what about in-use? >> So that's exactly the gap that we plug. What happens right now is that data at rest is protected because of discs that are already self-encrypting, or you have transparent data encryption that comes native with the database. Data in-flight is protected because of SSL. But when the data is actually being processed, in the memory of the database or datastore, it is exposed. So the threat is, if the credentials of the database are compromised, as happened back then with Starwood, or if the cloud infrastructure is compromised with some sort of an insider threat, like at Capital One, that data is exposed. That's precisely what we solve, by making sure that the data is protected as soon as it's created. We use standard encryption algorithms, AES, and we either do format-preserving, or true encryption with counter mode. And that data, it doesn't really matter where it ends up, >> Yeah. >> because it's always protected. >> Well, that's awesome. 
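The three transformation modes listed above — format-preserving tokenization, counter-mode encryption, and masked views — can be illustrated with a toy, standard-library-only sketch. To be clear about assumptions: the "counter mode" below derives its keystream from HMAC purely so the example runs without third-party libraries; a real deployment would use vetted AES-CTR, and none of these function names come from Baffle's product.

```python
# Toy illustration of tokenization, masking, and counter-mode encryption.
# NOT production cryptography: HMAC keystream stands in for real AES-CTR.
import hashlib
import hmac

KEY = b"demo-key-not-for-production"


def tokenize_digits(value: str) -> str:
    """Format-preserving tokenization: digits map to digits, punctuation
    stays put, so the token still 'looks like' a card or SSN downstream."""
    digest = hmac.new(KEY, value.encode(), hashlib.sha256).digest()
    return "".join(
        str((int(ch) + digest[i % len(digest)]) % 10) if ch.isdigit() else ch
        for i, ch in enumerate(value)
    )


def mask(value: str, keep: int = 4) -> str:
    """Masked view: reveal only the last `keep` characters."""
    return "*" * (len(value) - keep) + value[-keep:]


def ctr_encrypt(data: bytes, nonce: bytes) -> bytes:
    """Counter-mode pattern: XOR data with a keystream derived from
    (key, nonce, block counter). XOR twice with the same keystream
    round-trips, which is why the same function decrypts."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hmac.new(KEY, nonce + block.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(data[block:block + 32], ks))
    return bytes(out)


card = "4111-1111-1111-1111"
token = tokenize_digits(card)
print(token)         # same length and shape as the card number
print(mask(card))    # all but the last four characters starred out
nonce = b"\x00" * 8
ct = ctr_encrypt(b"secret record", nonce)
print(ctr_encrypt(ct, nonce))  # round-trips back to the plaintext
```

The trade-off between the three modes is exactly the "utility of the data" point from the conversation: tokens preserve format for legacy apps, masks preserve partial visibility, and counter-mode ciphertext preserves nothing but is fully recoverable.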
And I think this brings up the point that we have been covering on SiliconANGLE and theCUBE, that there's been structural change that's happened, >> Yes. >> called cloud computing, >> Yes. >> and then hybrid. Okay. Scale, the role of data, higher-level abstraction of services, developers are in charge, value creation, startups, and big companies. That success is now causing a new structural change happening now. >> Yes. >> This is one of them. What areas do you see that are happening right now that are structurally changing, that's right in front of us? One is more cloud native. So the success has now become the problem to solve - >> Yes. >> to get to the next level. >> Yeah. >> What are some of those? >> What we see is that instead of security being an afterthought, something that you use as a watchdog, where you create ways of monitoring where data is being exposed or exfiltrated, you want to build security into the data pipeline itself. As soon as data is created, you identify what is sensitive data, and you encrypt it or tokenize it as it flows into the pipeline, using things like Kafka plugins, or, what we are very clearly differentiating ourselves with, proxy architectures, so that it's completely transparent. You think you're writing to the datastore, but you're actually writing to the proxy, which in turn encrypts the data before it's stored. >> Do you think that's an efficient way to do it, or is it the only way to do it? >> It is a much more efficient way of doing it, because of the fact that you don't need any app-dev resources. There are many other ways of doing it. In fact, the cloud vendors provide development kits where you can just go do it yourself. So that is actually something that we completely avoid. And what makes it really, really interesting is that once the data is encrypted in the datastore, or database, we can do what is known as "Privacy Enhanced Computation." >> Mm. 
>> So we can actually process that data without decrypting it. >> Yeah. And so proxies then, with cloud computing, can be very fast, not the bottleneck they could be. >> In fact, the cloud makes it so. It's very hard to - >> You believe that? >> do these things in static infrastructure. In the cloud, there's an infinite amount of processing available, and there's containerization. >> And you have good network. >> You have a very good network, you have load balancers, you have ways of creating redundancy. >> Mm. So the cloud is actually enabling solutions like this. >> And the old way, proxies were seen as an architectural fail, in the old, antiquated, static web. >> And this is where startups don't have the baggage, right? We didn't have that baggage. (John laughs) We looked at the problem and said, of course we're going to use a proxy, because this is the best way to do this in an efficient way. >> Well, you bring up something that's happening right now that I hear a lot of CSOs and CIOs and executives say, CXOs say all the time: "Our", I won't say the word, "Our stuff has gotten complicated." >> Yes. >> So now I have tool sprawl, >> Yeah. >> I have skill gaps, and, on the rise, all these new managed services coming at me from the vendors who have never experienced my problem. And their reaction is, they don't get my problem, and they don't have the right solutions; it's more complexity. They solve the complexity by adding more complexity. >> Yes. Again, the proxy approach is very simple. >> So you're solving that with that approach. >> Exactly. It's very simple. And again, we don't get in the way. That's really the biggest differentiator. The forcing function really here is compliance, right? Because compliance is forcing these CSOs to actually adopt these solutions. >> All right, so love the compliance angle, love the proxy as ease of use, take the heavy lifting away, no operational problems or deviations. Now let's talk about workloads. >> Yeah. 
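One simple flavor of the "process data without decrypting it" idea above is deterministic equality tokens: the proxy stores a keyed token alongside the ciphertext, so equality queries match on tokens and the plaintext never reaches the database. This is a generic illustration under that assumption, not Baffle's actual scheme, and every name in it is invented.

```python
# Equality search over protected data: the datastore holds tokens and
# ciphertext only; the proxy tokenizes query values the same way.
import hashlib
import hmac

PROXY_KEY = b"proxy-side-secret"


def eq_token(value: str) -> str:
    """Deterministic keyed token: the same plaintext always yields the
    same token, so equality predicates work without decryption."""
    return hmac.new(PROXY_KEY, value.encode(), hashlib.sha256).hexdigest()


# The "database" rows: tokens for searchable fields, ciphertext elsewhere.
rows = [
    {"ssn_tok": eq_token("123-45-6789"), "name_ct": "<ciphertext>"},
    {"ssn_tok": eq_token("987-65-4321"), "name_ct": "<ciphertext>"},
]


def find_by_ssn(ssn: str) -> list:
    """Proxy-side lookup: tokenize the query value, then match on tokens."""
    t = eq_token(ssn)
    return [r for r in rows if r["ssn_tok"] == t]


print(len(find_by_ssn("123-45-6789")))  # 1
print(len(find_by_ssn("000-00-0000")))  # 0
```

The design trade-off is worth noting: deterministic tokens leak equality patterns (two rows with the same SSN share a token), which is the price paid for making the query work at all without decryption.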
>> 'Cause this is where the use is. So you got workloads being run at large scale, a lot of data moving around, compute as well. What's the challenge there? >> I think it's the volume of the data. Traditional solutions that were relying on legacy tokenization, I think, would replicate the entire storage, because they would create a token vault, for example. You cannot do that at this scale. You have to do something that's a lot more efficient, which is where you have to do it with a cryptography approach. So the workloads are diverse: lots of large files in the workloads, as well as structured workloads. What we have is a solution that actually goes across the board. We can do unstructured data with HTTP proxies, we can do structured data with SQL proxies. And that's how we are able to provide a complete solution for the pipeline. >> So, talk about the on-premise versus the cloud workload dynamic right now. Hybrid is a steady state right now. >> Yeah. >> Multi-cloud is a consequence of having multiple vendors, not true multi-cloud, but like, okay, they have Azure there, AWS here, I get that. But hybrid really is the steady state. >> Yes. >> Cloud operations. How are the workloads, the analytics, the data being managed on-prem and in the cloud? What's their relationship? What's the trend? What are you seeing happening there? >> I think the biggest trend we see is pipelining, right? The new ETL is streaming. You have these Kafka and Kinesis capabilities that are coming into the picture, where data is being ingested all the time. It is not a one-time migration. It's a stream. >> Yeah. >> So plugging into that stream is very important from an ingestion perspective. >> So it's not just a watchdog. >> No. >> It's the pipelining. >> It's built-in. It's built-in, it's real time; that's where streaming gives you another, diverse access to data. >> Exactly. >> Data lakes. You got data lakes, you have pipelines, you got streaming, you mentioned that. 
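The pipelining idea above — protect sensitive fields as records flow through the stream, before they ever land in a datastore — reduces to a per-record transform. Kafka and Kinesis specifics are omitted; the field names and the `tok_` prefix are hypothetical, chosen only for the sketch.

```python
# A stream-stage transform: tokenize sensitive fields in each record as
# it is ingested, so downstream stores never see the raw values.
import hashlib

SENSITIVE_FIELDS = {"email", "ssn"}


def protect(record: dict) -> dict:
    """Tokenize sensitive fields; pass everything else through untouched.
    In a real pipeline this would run inside a Kafka plugin or proxy."""
    return {
        k: ("tok_" + hashlib.sha256(v.encode()).hexdigest()[:12])
        if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }


stream = [
    {"email": "a@b.com", "plan": "pro"},
    {"email": "c@d.com", "plan": "free"},
]
protected = [protect(r) for r in stream]
print(protected[0]["plan"])       # non-sensitive field passes through
print(protected[0]["email"][:4])  # sensitive field is now a token
```

Because the transform sits in the stream rather than in the application, it matches the "built-in, not a watchdog" framing: no app-dev changes, and the raw values are gone before ingestion completes.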
So talk about the old-school OLTP, the old BI world. I think Power BI's like a $30 billion product. >> Yeah. >> And you got Tableau, built on OLTP, building cubes. Aren't we just building cubes in a new way, or, >> Well. >> is there any relevance to the old school? >> I think there is some relevance, and in fact that's, again, another place where the proxy architecture really helps, because it doesn't matter when your application was built. You can use Tableau, which nobody has any control over, and still process encrypted data. And so can Power BI; any SQL application can be used. And that's actually exactly what we like to do. >> So, I was talking to your team, I knew you were coming on, and they gave me a sound bite that I'm going to read to the audience, and I want to get your reaction to it. >> Sure. >> 'Cause I love this. I fell out of my chair when I first read this. "Data is the new oil." In 2010 that was mentioned here on theCUBE, of course. "Data is the new oil, but we have to ensure that it does not become the next asbestos." Okay. That is really clever. So we all know about asbestos. I'd add, as Dave Vellante says, "Lead paint, too." Remember lead paint? (Ameesh laughs) You got to scrape it out and repaint the house. Asbestos obviously causes a lot of cancer. You know, joking aside, the point is, it's problematic. >> It's the asset. >> Explain why that sentence is relevant. >> Sure. It's the assets and liabilities argument, right? You have an asset, which is data, but thanks to compliance regulations, and Gartner says 75% of the world will be subject to privacy regulations by 2023, it's also a liability. So if you don't store your data well, if you don't process your data responsibly, you are going to be liable. So while it might be the oil, and you're going to get lots of value out of it, be careful about the flip side. 
>> And the point is, there could be the "Grim Reaper" waiting for you if you don't do it right; the consequences, quantified, could be going out of business. >> Yes. But here's something that we just discovered from the survey that we did. While 93% of respondents said that compliance has had a big effect on their budgets, 75% actually thought that it makes them better. They can use their security posture as a competitive differentiator. That's very heartening to us. We don't like to sell the fear aspect of this. >> Yeah. We like to sell the fact that you look better compared to your neighbor if you have better data hygiene, back to the. >> There's the fear of missing out, or as they say, "Keeping up with the Joneses", making sure that your yard looks better than the next one. I get the vanity of that, but you're solving real problems. And this is interesting, and I want to get your thoughts on this. I read that you guys protect more than 100 billion records across highly regulated industries. Financial services, healthcare, industrial IOT, retail, and government. Is that true? >> Absolutely. Because what we are doing is enabling SaaS vendors to actually allow their customers to control their data. So we have a SaaS vendor who has been working with us for over three years now. They store confidential data from 30 different banks in the country. >> That's a lot of records. >> That's where the record, and. >> How many customers do you have? >> Well, I think. >> The next round of funding's (Ameesh laughs) probably they're linin' up to put money into you guys. >> Well, again, this is a very important problem, and people's businesses are dependent on this. We're just happy to provide the best tool out there that can do this. >> Okay, so what's your business model behind it? I love the success; by the way, I wanted to quote that stat to verify it. What's the business model, service, software?
>> The business model is software. We don't want anybody to send us their confidential data. We embed our software into our customers' environments. In the case of SaaS, we are not even visible, we are completely embedded. We are doing other relationships like that right now. >> And they pay you how? >> They pay us based on the volume of the data that they're protecting. >> Got it. >> That's in the case of large enterprise customers. >> Pay as you go. >> It is pay as you go; everything is annual licenses. Although multi-year licenses are very common, because once you adopt the solution, it is very sticky. And then for smaller customers, we also base our pricing just on databases. >> Got it. >> The number of databases. >> And the technology, as we just reviewed, is a low-code, no-code implementation kind of thing, right? >> It is by definition no code when it comes to the proxy. >> Yeah. >> When it comes to API integration, it could be low code. Yeah, it's all cloud-friendly, cloud-native. >> No disruption to operations. >> Exactly. >> That's the culprit. >> Well, yeah. >> Well, something like non-disruptive operations. (laughs) >> No, actually I'll give an example of a migration, right? We can do live migrations. So while the databases are still alive, as you write your. >> Live secure migrations. >> Exactly. You're securing - >> That's the one that manifests. >> your data as it migrates. >> Alright, so how much funding have you guys raised so far? >> We raised 36 and a half across our Series A and B. We raised the B late last year. >> Congratulations. >> Thank you. >> Who are the venture funders? >> True Ventures is our largest investor, followed by Celesta Capital; National Grid Partners is an investor, and so are Engineering Capital and Clear Vision Ventures. >> And the seed, it was from Engineering? >> Seed was from Engineering. >> Engineering Capital. >> And then True came in very early on. >> Okay.
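The live migration Ameesh mentions, securing data while the source database stays online, is commonly done with a dual-write-plus-backfill pattern. The sketch below is that general pattern only, not Baffle's actual mechanism; `old_store`, `new_store`, and the stand-in `encrypt` function are all invented for illustration.

```python
import hashlib

def encrypt(value: str) -> str:
    # Stand-in for real encryption; deterministic so the demo is checkable.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

old_store = {"r1": "alpha", "r2": "beta"}  # live source, stays writable
new_store = {}                             # encrypted destination

def live_write(key, value):
    old_store[key] = value            # application keeps working normally
    new_store[key] = encrypt(value)   # dual-write path secures the new copy

# Traffic keeps arriving during the migration.
live_write("r3", "gamma")

# Backfill pre-existing rows; setdefault avoids clobbering fresher dual-writes.
for key, value in old_store.items():
    new_store.setdefault(key, encrypt(value))

# Destination has caught up without ever taking the source offline.
print(sorted(new_store) == sorted(old_store))  # True
```

Once the backfill drains and the stores converge, reads can be cut over to the secured copy with no downtime, which is the "databases are still alive" property in the conversation above.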
>> Greenspring is also an investor in us, so is Industrial Ventures. >> Well, privacy is a big concern, a big application for you guys. Privacy, secure migrations. >> Very much so. What we believe very strongly is that security is personal: security is about yours and my data. Privacy is what the data collector is responsible for. (John laughs) So the enterprise better be making sure that they've complied with privacy regulations, because the regulations don't tell you how to protect the data. They just fine you. >> Well, you're technically a six-year-old startup company. Six, seven years old. >> Yeah. >> Roughly. So yeah, startups can go on long like this, still a startup, privately held, you're growing, you've got big records under management there, congratulations. What's next? >> I think scaling the business. We are seeing lots of applications for this particular solution. It's going beyond just regulated industries. Like I said, it's a differentiating factor now. >> Yeah >> So retail, and a lot of other IOT related industrial customers - >> Yeah. >> are also coming. >> Ameesh, talk about the show here. We're at re:inforce, actually we're live here on the ground, the show floor buzzing. What's your takeaway? What's the vibe this year? If you had to share your opinion on the top story here at the show, what would be the two top things, or three things? >> I think it's two things. First of all, it feels like we are back. (both laugh) It's amazing to see people on the show floor. >> Yeah. >> People coming in and asking questions and getting to see the product. The second thing that I think is very gratifying is, people come in and say, "Oh, I've heard of you guys." So thanks to digital media, and digital marketing. >> They weren't baffled. They want Baffle. >> Exactly. >> They use Baffle. >> Looks like our outreach has helped, >> Yeah. >> and has kept the continuity, which is a big deal. >> Yeah, and now you're a CUBE alumni, welcome to the fold. >> Thank you.
>> Appreciate you coming on. And we're looking forward to profiling you some day in our startup showcase, and certainly, we'll see you in the Palo Alto studios. Love to have you come in for a deeper dive. >> Sounds great. Looking forward to it. >> Congratulations on all your success, and thanks for coming on theCUBE, here at re:inforce. >> Thank you, John. >> Okay, we're here on the ground with live coverage in Boston, Massachusetts for AWS re:inforce 22. I'm John Furrier, your host of theCUBE, with Dave Vellante, who's in an analyst session right now. He'll be right back with us on the next interview, coming up shortly. Thanks for watching. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kurt | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Ameesh | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2010 | DATE | 0.99+ |
National Grid Partners | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
six year | QUANTITY | 0.99+ |
Engineering Capital | ORGANIZATION | 0.99+ |
$30 billion | QUANTITY | 0.99+ |
Six | QUANTITY | 0.99+ |
Celesta Capital | ORGANIZATION | 0.99+ |
Ameesh Divatia | PERSON | 0.99+ |
75% | QUANTITY | 0.99+ |
Clear Vision Ventures | ORGANIZATION | 0.99+ |
93% | QUANTITY | 0.99+ |
30 different banks | QUANTITY | 0.99+ |
Greenspring | ORGANIZATION | 0.99+ |
True Ventures | ORGANIZATION | 0.99+ |
True | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
2023 | DATE | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Power BI | TITLE | 0.98+ |
seven years | QUANTITY | 0.98+ |
over three years | QUANTITY | 0.98+ |
Dave Vellante | PERSON | 0.98+ |
First | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
Tableau | TITLE | 0.98+ |
first | QUANTITY | 0.97+ |
three things | QUANTITY | 0.97+ |
36 and a half | QUANTITY | 0.97+ |
second thing | QUANTITY | 0.97+ |
one time | QUANTITY | 0.97+ |
series A | OTHER | 0.97+ |
this year | DATE | 0.96+ |
late last year | DATE | 0.96+ |
Baffle | ORGANIZATION | 0.96+ |
Capital One | ORGANIZATION | 0.96+ |
Industrial Ventures | ORGANIZATION | 0.96+ |
128 | QUANTITY | 0.95+ |
Boston, | LOCATION | 0.95+ |
Kafka | TITLE | 0.95+ |
more than a 100 billion records | QUANTITY | 0.95+ |
Starwood | ORGANIZATION | 0.94+ |
two top things | QUANTITY | 0.93+ |
Boston, Massachusetts | LOCATION | 0.93+ |
CUBE | ORGANIZATION | 0.91+ |
SQL | TITLE | 0.89+ |
re:Mars | TITLE | 0.88+ |
capex | ORGANIZATION | 0.87+ |
three different things | QUANTITY | 0.86+ |
One | QUANTITY | 0.85+ |
64 | QUANTITY | 0.83+ |
Azure | TITLE | 0.83+ |
Hyperscalers | COMMERCIAL_ITEM | 0.82+ |
OLTP | TITLE | 0.8+ |
Massachusetts | LOCATION | 0.67+ |
re:inforce 22 security conference | EVENT | 0.65+ |
SiliconAngle | ORGANIZATION | 0.59+ |
Computation | OTHER | 0.55+ |
SuperCloud | ORGANIZATION | 0.55+ |
Sequel | TITLE | 0.53+ |
Kinesis | ORGANIZATION | 0.48+ |
2022 | DATE | 0.41+ |
Joneses | TITLE | 0.27+ |
theCUBE Insights with Industry Analysts | Snowflake Summit 2022
>>Okay. Okay. We're back at Caesars Forum, Snowflake Summit 2022, theCUBE's continuous wall-to-wall coverage. We're so excited to have the analyst panel here, some of my colleagues that we've done a number of power panels with; you've probably seen some of them. Dave Menninger is here. He's the senior vice president and research director at Ventana Research. To his left is Tony Baer, principal at dbInsight, and in the co-host seat, Sanjeev Mohan of SanjMo. Guys, thanks so much for coming on. >> Glad we can. Thank you. >> You're very welcome. I wasn't able to attend the analyst sessions because I've been doing this all day, every day. But let me start with you, Dave. What have you seen that's kind of interested you? Pluses, minuses, concerns. >> Well, how about if I focus on what I think is valuable to the customers of Snowflake? Our research shows that the majority of organisations, the majority of people, do not have access to analytics. And a couple of things they've announced I think help to address those issues very directly. So Snowpark and support for Python and other languages is a way for organisations to embed analytics into different business processes. And so I think that will be really beneficial to try and get analytics into more people's hands. And I also think that the native applications as part of the marketplace are another way to get applications into people's hands, rather than just analytical tools. Because most people in the organisation are not analysts; they're doing some line of business function. They're HR managers, they're marketing people, they're salespeople, they're finance people. They're not sitting there mucking around in the data; they're doing a job, and they need analytics in that job. >> So, Tony, thank you. I've heard a lot of data mesh talk this week. It's kind of funny. Can't >> seem to get away from it. You >> can't, see.
It seems to be gathering momentum. But what have you seen that's been interesting? >> What I have noticed, unfortunately, you know, because the rooms are too small, you just can't get into the data mesh sessions, so there's a lot of interest in it. Um, I don't think there's very much understanding of it yet, but I think the idea that you can put all the data in one place, which, you know, to me, in a way it sounds almost like the enterprise data warehouse, you know, cloud-native edition: bring it all into one place again. Um, I think for these folks, that might be kind of like a linchpin for that. I think there are several other things that actually have made a bigger impression on me at this event. One is basically, um, we watched their move with Unistore. Um, and it's kind of interesting coming, you know, coming from MongoDB last week. And I see it's like these two companies seem to be converging towards the same place at different speeds. I think Snowflake is going to get there faster than Mongo for a number of different reasons, but I see a number of common threads here. I mean, one is that Mongo, as a company, has always been oriented towards developers. They need to, you know, start cultivating data people, >> and these guys are going the other way. >> Exactly. Bingo. And the thing is, I think where they're converging is the idea of operational analytics and trying to serve all constituencies. The other thing, which also is in terms of serving, you know, multiple constituencies, is how Snowflake has laid out Snowpark, and what I'm finding is there's an interesting dichotomy. On one hand, you have this very ingrained integration of Anaconda, which I think is pretty ingenious.
On the other hand, you speak to, let's say, the DataRobot folks, and they say, you know something, our folks want to work in data science; we want to work in our environment and use Snowflake in the background. So I see some interesting sort of cross-cutting trends. >> So, Sanjeev, I mean, Frank Slootman will talk about how there's definitely benefits to going into the walled garden. Yeah, I don't think we dispute that, but we see them making moves and adding more and more open source capabilities like Apache Iceberg. Is that a move to sort of counteract the narrative that Databricks has put out there? Is that customer driven? What's your take on that? >> Uh, primarily I think it is to counteract this whole notion that once you move data into Snowflake, it's a proprietary format. So I think that's how it started. But it's hugely beneficial to the customers, to the users, because now, if you have large amounts of data in Parquet files, you can leave it on S3. But then, using the Apache Iceberg table format in Snowflake, you get all the benefits of Snowflake's optimizer. So, for example, you get the, you know, the micro-partitioning, you get the metadata. So, uh, in a single query, you can join; you can do a select from a Snowflake table unioned with a select from an Iceberg table, and you can do stored procedures, user-defined functions. So I think what they've done is extremely interesting. Uh, Iceberg by itself still does not have multi-table transactional capabilities. So if I'm running a workload, I might be touching 10 different tables. If I use Apache Iceberg in a raw format, they don't have it. But Snowflake does. >> Right, hence there's the delta. And maybe that closes over time. I want to ask you, as you look around, I mean, the ecosystem's pretty vibrant. I mean, it reminds me of, like, reinvent in 2013, you know?
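The single-query union across a native table and an external Iceberg table described above can be loosely illustrated with one engine querying an internal table and an attached "external" store in a single statement. This is an analogy only: sqlite3 stands in for Snowflake, the attached in-memory database stands in for an Iceberg external table, and the table names are invented, not Snowflake's actual Iceberg support or syntax.

```python
import sqlite3

engine = sqlite3.connect(":memory:")

# "Native" table managed directly by the engine.
engine.execute("CREATE TABLE native_sales (amount INTEGER)")
engine.executemany("INSERT INTO native_sales VALUES (?)", [(100,), (200,)])

# Attach a second store to play the role of an external table format.
engine.execute("ATTACH DATABASE ':memory:' AS lake")
engine.execute("CREATE TABLE lake.iceberg_sales (amount INTEGER)")
engine.execute("INSERT INTO lake.iceberg_sales VALUES (300)")

# One statement spans both: the analyst never cares where each table lives.
rows = engine.execute("""
    SELECT amount FROM native_sales
    UNION ALL
    SELECT amount FROM lake.iceberg_sales
    ORDER BY amount
""").fetchall()
print(rows)  # [(100,), (200,), (300,)]
```

The value in the real system is that the external data stays in open Parquet/Iceberg on S3 while still getting the engine's optimizer; the sketch only captures the query-surface half of that point.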
But then I'm struck by the complexity of the last big data era, Hadoop and all the different tools. And is this different, or is it the sort of same wine, new bottle? You guys have any thoughts on that? >> I think it's different, and I'll tell you why. I think it's different because it's based around SQL. So back to Tony's point, these vendors are coming at this from different angles, right? You've got data warehouse vendors and you've got data lake vendors, and they're all going to meet in the middle. So in your case, you talked operational-analytical, but the same thing is true with data lake and data warehouse, and Snowflake no longer wants to be known as the data warehouse; they're a data cloud. And our research again, I like to base everything off of that. >> I love what our >> research shows: two thirds of organisations have SQL skills and one third have big data skills, so >> you >> know they're going to meet in the middle. But it sure is a lot easier to bring along those people who know SQL already to that midpoint than it is to bring the big data people. Remember, >> Amr Awadallah, one of the founders of Cloudera, said to me one time here on theCUBE that, uh, SQL is the killer app for Hadoop. >> Yeah, >> the difference this time, you know, with Snowflake, is that you don't have to worry about taming the zoo animals. They really have thought out the ease of use, you know? I mean, from the get go, they thought of two twin poles: one is ease of use, and the other is scale. And that's basically, you know, I think very much what differentiates it. I mean, Hadoop did have the scale, but it didn't have the ease of use. >> But don't I still need, like, if I have, you know, governance from this vendor or, you know, data prep from another, don't I still have to have expertise that's sort of distributed across those worlds, right? I mean, go ahead. Yeah.
>> So the way I see it is Snowflake is adding more and more capabilities right into the database. So, for example, they've gone ahead and added security and privacy, so you can now create policies and do even cell-level masking, dynamic masking. But most organisations have more than Snowflake. So what we are starting to see all around here is that there's a whole series of data catalogue companies, a bunch of companies that are doing dynamic data masking, security and governance, data observability, which is not a space Snowflake has gone into. So there's a whole ecosystem of companies that is mushrooming. Although, you know, they're using the native capabilities of Snowflake, they are at a level higher. So if you have a data lake and a cloud data warehouse and you have other, like, relational databases, you can run these cross-platform capabilities in that layer. So that way, you know, Snowflake's done a great job of enabling that ecosystem. >> What about the Streamlit acquisition? Did you see anything here that indicated they're making strong progress there? Are you excited about that? Are you sceptical? Go ahead. >> I think it's like the last mile, essentially. In other words, it's like, okay, you have folks that are very, very comfortable with Tableau, but you do have developers who don't want to have to shell out to a separate tool. And so this is where Snowflake is essentially working to address that constituency. Um, to Sanjeev's point, I think part of what makes this different from the Hadoop era is the fact that a lot of vendors are taking it very seriously to make these capabilities native. Obviously Snowflake acquired Streamlit, so we can expect that Streamlit's capabilities are going to be native. >> And the other thing, too, about the Hadoop ecosystem is Cloudera had to help fund all those different projects and got really, really spread thin.
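The dynamic masking capability Sanjeev describes, where the same query returns masked or clear values depending on who is asking, can be sketched as a tiny policy function. This is loosely modeled on the masking-policy idea, not Snowflake's actual syntax; the role names and `select_email` helper are invented for illustration.

```python
# Hedged sketch of dynamic data masking: the policy is evaluated at query
# time against the caller's role, so the stored value never changes.
def masking_policy(value: str, role: str) -> str:
    if role == "ANALYST":                      # privileged roles see clear text
        return value
    return value[:2] + "*" * (len(value) - 2)  # everyone else sees a mask

row = {"email": "alice@example.com"}

def select_email(role: str) -> str:
    # Stand-in for "SELECT email FROM t" executed under a given role.
    return masking_policy(row["email"], role)

print(select_email("ANALYST"))  # alice@example.com
print(select_email("PUBLIC"))   # al***************
```

The design point is that masking is attached to the column, not baked into each application's queries, which is why it can be layered onto existing tools without code changes.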
I want to ask you guys about this supercloud term we use. Supercloud is this sort of metaphor for the next wave of cloud. You've got infrastructure: AWS, Azure, Google. It's not multi-cloud, but you've got that infrastructure, you're building a layer on top of it that hides the underlying complexities of the primitives and the APIs, and you're adding new value, in this case the data cloud, or super data cloud. And what we're seeing now is Snowflake putting forth the notion that they're adding a super PaaS layer. You can now build applications that you can monetise, which to me is kind of exciting. It makes this platform even less discretionary. We had a lot of talk on Wall Street about discretionary spending, and that's not discretionary if you're monetising it. Um, what do you guys think about that? Is this something that's real? Is it just a figment of my imagination, or do you see it a different way? Any thoughts on that? >> So, in effect, they're trying to become a data operating system, right? And I think that's wonderful. It's ambitious. I think they'll experience some success with that. As I said, applications are important. That's a great way to deliver information. You can monetise them, so you know there's a good economic model around it. I think they will still struggle, however, with bringing everything together onto one platform. That's always the challenge. Can you become the platform? That's hard, hard to predict. You know, I think this is pretty exciting, right? A lot of energy, a large ecosystem. There is a network effect already. Can they succeed in being the only place where data exists? You know, I think that's going to be a challenge. >> I mean, the fact is, this is a classic best-of-breed versus the umbrella play. The thing is, this is nothing new. I mean, this is like, you know, the old days with enterprise applications, where basically Oracle and SAP vacuumed up all these.
You know, all these applications into their ecosystems, whereas with Snowflake, if you look at the cloud folks, the hyperscalers are still building out their own portfolios as well. Some, you know, some hyperscalers are more partner-friendly than others. What Snowflake is saying is, we're going to give all of you folks who basically are competing against the hyperscalers in various areas, like data catalogue and pipelines and all that sort of wonderful stuff, we'll make you basically, you know, all equal citizens. You know, the burden is on you: we will lay out the APIs, we'll allow you to basically, you know, integrate natively to us so you can provide as good an experience. But the onus is on your back. >> Should the ecosystem be concerned, as they were back at re:Invent 2014, that Amazon was going to nibble away at them, or is it different? >> I find what they're doing is different. Uh, for example, data sharing. They were the first ones out the door with data sharing at a large scale. And then everybody jumped in and said, oh, we also do data sharing; all the hyperscalers came in. But now what Snowflake has done is they've taken it to the next level. Now they're saying it's not just data sharing, it's app sharing. And not only app sharing: you can build, test, deploy, and then monetise it, make it discoverable through, you know, through your marketplace. >> You can monetise it. >> Yes. Yeah, so I think what they're doing is they are taking it a step further than what the hyperscalers are doing. And because, like they said, it's becoming like the data operating system: you log in and you have all of these different functionalities. You can do machine learning, now you can do data quality, you can do data preparation, and you can do monetisation. >> Who do you think is Snowflake's biggest competitor? What do you guys think? It's a hard question, isn't it?
Because you're like, because we all get the "we separate compute from storage", we have a data cloud, and you go, okay, that's nice, >> but there's, like, a crack. I think >> there's uniqueness. I >> mean, put it this way. In the old days, it would have been, you know, the prime household names. I think today it's the hyperscalers, and the idea, what I mean, again, this comes down to the best of breed versus, you know, get it all from one source. So where is your comfort level? Um, so I think they're kind of in coopetition with the hyperscalers. >> Okay, so it's not Databricks, because why, they're smaller? >> Well, there is some, okay, now within the best-of-breed area, yes, there is competition. The obvious one is Databricks, coming in from the data engineering angle, you know, with Snowflake basically coming from the data analyst angle. Another potential competitor, and I think Snowflake basically, you know, admitted as such, potentially is Mongo >> DB. Yeah, >> Exactly. So I mean, yes, there are two different levels, sort of on a longer term collision course. >> Exactly. Exactly. >> Sort of a ServiceNow and Salesforce >> thing. When I say that, a lot of people just laughed. I was like, no, you're kidding. There's no way. I said, excuse me. >> But then you see Mongo last week: we're adding some analytics capabilities, and they've always been developers, as you say, and >> they trashed SQL. But yet they finally have started to write their first real SQL. >> We had MQL. Well, now we have SQL. So what >> were those numbers, >> Dave? Two thirds. One third. >> So the hyperscalers, are you going to trust your hyperscalers to do your cross-cloud? I mean, maybe Google, maybe, I mean, Microsoft, perhaps; AWS, not there yet, right? I mean, how important is cross cloud, multi cloud, supercloud, whatever you want to call it? What does your data >> show?
Cloud is important, if I remember correctly. Our research shows that three quarters of organisations are operating in the cloud, and 52% are operating across more than one cloud. So, uh, two thirds of the organisations that are in the cloud are doing multi-cloud, so that's pretty significant. And now they may be operating across clouds for different reasons. Maybe one application runs in one cloud provider, another application runs in another cloud provider. But I do think organisations want that leverage over the hyperscalers, right? They want to be able to tell the hyperscaler, I'm gonna move my workloads over here if you don't give us a better rate. Uh, >> I mean, I think, you know, from a database standpoint, I think you're right. I mean, they are competing against some really well funded platforms. You look at BigQuery: very, you know, solid platform. Redshift, for all its faults, has really done an amazing job of moving forward. But to David's point, you know, those, to me, in no way, those hyperscalers aren't going to solve that cross-cloud problem, right? >> Right. No, certainly not as quickly. No. >> Or with as much zeal, right? Yeah, right, across cloud. But we're gonna operate better on our >> Exactly. Yes. >> Yes. Even when we talk about multi cloud, there are many, many definitions; like, you know, it can mean anything. So the way Snowflake does multi cloud and the way MongoDB does are very different. So Snowflake says, we run on all the hyperscalers, but you have to replicate your data. What MongoDB is claiming is that one cluster can have nodes in multiple different clouds. That is, you know, quite something. >> Yeah, right. I mean, again, you hit that. We got to go. But, uh, last question, um, Snowflake: undervalued, overvalued, or just about right? >> In the stock market or in customers? Yeah. Yeah, well, but, you know, I'm not sure that's the right question. >> That's the question I'm asking.
You know, >> I'll say the question is undervalued or overvalued for customers, right? That's really what matters. Um, there's a different audience who cares about the investor side; some of those are watching. But I believe that from the customer's perspective, it's probably valued about right, because >> the reason I ask it is because it was so hyped. You had a $100 billion valuation. It surpassed ServiceNow's value, which is crazy. Now it's obviously come back quite a bit, below its IPO price. But you guys were at the financial analyst meeting. Scarpelli laid out 2029 projections: $10 billion in revenue, 25 percent free cash flow, 20% operating profit. I mean, they better be worth more than they are today if they do that. >> If I see the momentum here this week, I think they are undervalued. But before this week, I probably would have thought they're at the right valuation. >> I would say they're probably more at the right valuation, because the IPO valuation was just such a false valuation, so hyped. >> Guys, I could go on for another 45 minutes. Thanks so much. David, Tony, Sanjeev, always great to have you on. We'll have you back for sure. >> Thanks for having us. >> All right. Thank you. Keep it right there. We're wrapping up day two on theCUBE at Snowflake Summit 2022. Right back. Mm. Mhm.
Gian Merlino, Imply.io | AWS Startup Showcase S2 E2
(upbeat music) >> Hello, and welcome to theCUBE's presentation of the AWS Startup Showcase: Data as Code. This is Season 2, Episode 2 of the ongoing series covering exciting startups from the AWS ecosystem, and we're going to talk about the future of enterprise data analytics. I'm your host, John Furrier, and today we're joined by Gian Merlino, CTO and co-founder of Imply.io. Welcome to theCUBE. >> Hey, thanks for having me. >> Building analytics apps with Apache Druid and Imply is the focus of this talk, and your company is being showcased today. So thanks for coming on. You guys have been in streaming data at large scale for many, many years, pioneers going back a ways. This past decade has been the key focus. Druid's unique position in that market has been key, and you guys have been powering it. Take a minute to explain what you guys are doing over there at Imply. >> Yeah, for sure. So to talk about Imply, I'll talk about Druid first. Imply is an open source based company, and Apache Druid is the open source project that the Imply product is built around. So what Druid's all about is it's a database to power analytical applications. And there's a couple things I want to talk about there. The first is why do we need that? And the second is why are we good at it? And I'll give a little flavor of both. So why do we need a database to power analytical apps? It's the same reason we need databases to power transactional apps. The requirements of these applications are different: analytical applications, apps where you have tons of data coming in, you have lots of different people wanting to interact with that data, see what's happening both real time and historical. The requirements of that kind of application have sort of given rise to a new kind of database that Druid is one example of. There's others, of course, out there in both the open source and non open source world. And what makes Druid really good at it is, people often say what is Druid's big secret?
How is it so good? Why is it so fast? And I never know what to say to that. I always sort of go to, well, it's just getting all the little details right. It's a lot of pieces that individually need to be engineered. You build up software in layers, you build up a database in layers, just like any other piece of software. And to have really high performance and to do really well at a specific purpose, you kind of have to get each layer right and have each layer have as little overhead as possible. And so it's just a lot of kind of nitty gritty engineering work. >> What's interesting about the trends over the past 10 years in particular, maybe you can go back 10, 15 years, is that state of the art for a database was: stream a bunch of data, put it into a pile, index it, interrogate it, get some reports. Pretty basic stuff. And then all of a sudden, now with cloud, you have thousands of databases out there, living in the wild. So now, with Kafka and Kinesis, these kinds of technologies, streaming data is happening in real time, so you don't have time to put it in a pile or index it. You want real time analytics. And so whether they're mobile apps, Instagrams of the world, this is now what people want in the enterprise. You guys are at the heart of this. Can you talk about that dynamic of getting data quickly at scale? >> So our thinking is that actually both things matter. Realtime data matters, but also historical context matters. And the best way to get historical context out of data is to put it in a pile, index it, so to speak, and then the best way to get realtime context on what's happening right now is to be able to operate on these streams. And so one of the things that we do in Druid, I wish I had more time to talk about it, but one of the things that we do in Druid is we kind of integrate this real time processing and this historical processing.
So we actually have a system that we call the historical system that does what you're saying: take all this data, put it in a pile, index it, for all your historical data. And we have a system that we call the realtime system that is pulling data in from things like Kafka, Kinesis, or getting data pushed into it, as the case may be. And this system is responsible for all the data that's recent. Maybe the last hour or two of data will be handled by this system, and then the older stuff is handled by the historical system. And our query layer blends these two together seamlessly, so a user never needs to think about whether they're querying realtime data or historical data. It's presented as a blended view. >> It's interesting, and you know, a lot of people just say, hey, I don't really have the expertise, and now they're trying to learn it, so their default was throw it into a data lake. So that brings back that historical. So the rise of the data lake, you're seeing Databricks and others out there doing very well with the data lakes. How do you guys fit into that? 'Cause that makes a lot of sense too, 'cause that looks like historical information. >> So data lakes are great technology. We love that kind of stuff. I would say that with Druid there's actually two very popular patterns. One is what I would call streaming first, a stream focus where you connect up to something like Kafka and you load data from the stream, and then we will actually take that data, we'll store all the historical data that came from the stream, and then blend those two together. And the other pattern that's also very common is the data lake pattern. So you have a data lake, and then you're sort of mirroring that data from the data lake into Druid.
This is really common when you have a data lake that you want to be able to build an application on top of. You want to say, I have this data in the data lake, I have my table, I want to build an application that has hundreds of people using it, that has really fast response time, that is always online. And so you mirror that data into Druid and then build your app on top of that. >> Gian, take me through the progression of the maturity cycle here. As you look back even a few years, the pioneers and the hardcore streaming data folks using data analytics at scale the way you guys are doing with Druid were really a small percentage of the population. And then as hyperscale became mainstream, it's now in the enterprise. How stable is it? What's the current state of the art relative to the stability and adoption of the techniques that you guys are seeing? >> I think what we're seeing right now at this stage in the game, and this is something that we kind of see at the commercial side of Imply, is the realization that you actually can get a lot of value out of data by building interactive apps around it and by allowing people to kind of slice and dice it and play with it, and just kind of getting that out there to everybody, that there is a lot of value here and that it is actually very feasible to do with current technology. I've been working on this problem in my own career for the past decade. 10 years ago, where we were is even the most high tech of tech companies were like, well, I can sort of see the value; it seems like it might be difficult. And we've gone from there to the high tech companies realizing that it is valuable and it is very doable. And I think there was a tipping point that I saw a few years ago when Druid and databases like it really started to blow up. And I think now we're seeing that beyond sort of the high tech companies, which is great to see.
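To make the two-tier design Gian describes concrete, here is a deliberately tiny Python sketch of the idea: a realtime buffer for recent rows, a historical tier for older ones, and a single query path that blends both. This is a toy model, not Druid's implementation; the one-hour cutoff and the (timestamp, value) row shape are illustrative assumptions.

```python
import time

CUTOFF_SECONDS = 3600  # rows newer than this live in the "realtime" tier (illustrative)

class BlendedStore:
    """Toy model of Druid's split: recent rows in a realtime buffer,
    older rows in a historical tier, one blended query view."""

    def __init__(self, now=None):
        self.now = now or time.time
        self.realtime = []    # recent events, appended as they stream in
        self.historical = []  # older events, compacted out of the buffer

    def ingest(self, timestamp, value):
        self.realtime.append((timestamp, value))

    def compact(self):
        """Move rows older than the cutoff into the historical tier."""
        cutoff = self.now() - CUTOFF_SECONDS
        old = [r for r in self.realtime if r[0] < cutoff]
        self.realtime = [r for r in self.realtime if r[0] >= cutoff]
        self.historical.extend(old)

    def query_sum(self, since):
        """The caller never says which tier to hit; both are blended."""
        return sum(v for t, v in self.realtime + self.historical if t >= since)
```

A real Druid cluster does this with indexed, columnar segment files and a distributed broker, but the user-facing property is the same: one query, no realtime-versus-historical decision.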
>> And a lot of people see the value of the data, and they see the application. Data as code means the application developers really want to have that functionality. Can you share the roadmap for the next 12 months for you guys on what's coming next? What's coming around the corner? >> Yeah, for sure. I mentioned the Apache open source community; we're one member of that community, a very prominent one, but one member. So I'll talk a bit about what we're doing for the Druid project as part of our effort to make Druid better and take it to the next level, and then I'll talk about some of the stuff we're doing on, I guess, the Druid sort of commercial side. So on the Druid side, stuff that we're doing to make Druid better, take it to the next level: the big thing is something that we really started writing about a few weeks ago, the multi-stage query engine that we're working on. If you're interested, the full details are in a blog on our website and also on GitHub, on the Apache Druid GitHub, but the short version is we're sort of extending Druid's query engine to support more and varied kinds of queries, with a focus on reporting queries, more complex queries. Druid's core query engine has classically been extremely good at doing rapid fire queries very quickly, so think thousands of queries per second where each query is maybe something that involves a filter and a group by, like a relatively straightforward query, but we're just doing thousands of them constantly. Historically, folks have not reached for technologies like Druid for really complex, thousand-line SQL queries, complex reporting needs. Although people really do need to do both interactive stuff and complex stuff on the same dataset, and so that's why we're building out these capabilities in Druid. And then on the Imply commercial side, the big effort for this year is Polaris, which is our cloud based Druid offering.
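Since Druid exposes its query engine through SQL over HTTP, the rapid-fire filter-plus-group-by queries Gian mentions are mostly payload construction from the client's side. The sketch below builds, but deliberately does not send, a request for Druid's SQL endpoint; the host, port, datasource, and column names are made-up placeholders, so treat this as a shape sketch rather than a tested client.

```python
import json
from urllib.request import Request

# Hypothetical router address -- substitute your own cluster's values.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def build_topn_request(datasource, dimension, hours=1, limit=10):
    """Build a rapid-fire filter + group-by query of the kind Druid is
    optimized for: top values of one dimension over a recent window."""
    sql = (
        f'SELECT "{dimension}", COUNT(*) AS cnt '
        f'FROM "{datasource}" '
        f"WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '{hours}' HOUR "
        f'GROUP BY "{dimension}" ORDER BY cnt DESC LIMIT {limit}'
    )
    body = json.dumps({"query": sql}).encode()
    return Request(DRUID_SQL_URL, data=body,
                   headers={"Content-Type": "application/json"})

req = build_topn_request("web_events", "country")
# urllib.request.urlopen(req) would execute it against a live cluster.
```

On a real deployment you would fire thousands of these per second from an application backend, varying only the filter and dimension; the JSON body stays this small.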
>> Talk about the relationship between Druid and Imply? Share with the folks out there how that works. >> So Druid is, like I mentioned before, it's Apache Druid so it's a community based project. It's not a project that is owned by Imply, some open source projects are sort of owned or sponsored by a particular organization. Druid is not, Druid is an independent project. Imply is the biggest contributor to Druid. So the imply engineering team is contributing tons of stuff constantly and we're really putting a lot of the work in to improve Druid although it is a community effort. >> You guys are launching a new SaaS service on AWS. Can you tell me about what that's happening there, what it's all about? >> Yeah, so we actually launched that a couple weeks ago. It's called Polaris. It's very cool. So historically there's been two ways, you can either get started with Apache Druid, it's open source, you install it yourself, or you can get started with Imply Enterprise which is our enterprise offering. And these are the two ways you can get started historically. One of the issues of getting started with Apache Druid is that it is a very complicated distributed database. It's simple enough to run on a single server but once you want to scale things out, once you get all these things set up, you may want someone to take some of that operational burden off your hands. And on the Imply Enterprise side, it says right there in the name, it's enterprise product. It's something that may take a little bit of time to get started with. It's not something you can just roll up with a credit card and sign up for. So Polaris is really about of having a cloud product that's sort of designed to be really easy to get started with, really self-service that kind of stuff. 
So kind of providing a really nice getting started experience that takes that maintenance burden and operational burden away from you, but is also as easy to get started with as a cloud database product should be. >> So more developer friendly from an onboarding standpoint, classic. >> Exactly. Much more developer friendly is what we're going for with that product. >> So take me through the state of the art of data as code in your mind, 'cause infrastructure as code, DevOps, has been awesome, that's cloud scale, we've seen that. Data as Code is a term we coined, but it means data's in the developer process. How do you see data being integrated into the workflow for developers in the future? >> Great question. I mean, all kinds of ways. Part of the reason, and I kind of alluded to this earlier, is how valuable it is to build analytical applications, applications based on data and based on letting people do analysis. And I guess to develop in that context, there's kind of two big ways that we sort of see these things getting pushed out. One is developers building apps for other people to use. So think like, I want to build something like Google Analytics, I want to build something that tracks my web traffic and then lets the marketing team slice and dice through it and make decisions about how well the marketing's doing. You can build something like that with databases like Druid and products like what we have at Imply. I guess the other way is things that are actually helping developers do their own job. So kind of like use your own product or use it for yourself. And in this world, you kind of have things like... So going beyond what I think is my favorite use case, I'll just talk about one. My favorite use case is, so I'm really into performance. I've spent the last 10 years of my life working on a high performance database, so obviously I'm into this kind of stuff. I love when people use our product to help make their own products faster.
So this concept of performance monitoring and performance management for applications. One thing that I've seen some of our customers and some of our users do that I really love is when you take that performance data of your own app as far as it can possibly go, take it to the next level. I think the basic level of using performance data is: I collect performance data from my application deployed out there in the world, and I just use it for monitoring. I can say, okay, my response times are getting high in this region, maybe there's something wrong with that region. One of the very original use cases for Druid was Netflix doing performance analysis. Performance analysis is more exciting than monitoring, because you're not just understanding that performance is good or bad in whatever region; you're getting very fine grained. You're saying in this region, on this server rack, for these devices, I'm seeing a degradation or I'm seeing an increase. You can see things like: Apple just rolled out a new version of iOS, and on that new version of iOS my app is performing worse than on the older version. And even though not many devices are on that new version yet, I can see that, because I have the ability to get really deep into the data, and then I can start slicing and dicing more. I can say, for those new iOS people, is it all iOS devices? Is it just the iPhone? Is it just the iPad? And that kind of stuff is just one example, but it's an example that I really like. >> It's kind of like the data about the data; it was always good to have context. You're doing data analytics on data analytics to see how it's working at scale. This is interesting, because now you're bringing up the classic finding the needle in the haystack of needles, so to speak, where you have so much data out there. Like edge cases, edge computing, for instance: you have devices sending data off. There's so much data coming in, the scale is a big issue.
This is kind of where you guys seem to be a nice fit: large scale data ingestion, large scale data management, large scale data insights, kind of all rolled into one. Is that kind of-? >> Yeah, for sure. One of the things that we knew we had to do with Druid was, we were building it for the internet age, so we knew it had to scale well. The original use case for Druid, the very first one that we ended up building for, the reason we built it in the first place, is because that original use case had massive scale and we struggled finding something. We were literally trying to do what we see people doing now, which is build an app on a massive data set, and we were struggling to do it. And so we knew it had to scale to massive data sets. And so a little flavor of how that works: like I was mentioning earlier, this realtime system and historical system. The realtime system is scalable, it scales out; if you're reading from Kafka, we scale out just like any other Kafka consumer. And then the historical system is all based on what we call segments, which are these files that have a few million rows per file. And a cluster that's really big might have thousands of servers, millions of segments, but it's a design that does scale to these multi-trillion row tables. >> It's interesting. You go back to when you probably started: you had Twitter, Netflix, Facebook, I mean, a handful of companies that were at that scale. Now the trend is you're on this wave where those hyperscalers and these unique huge scale app companies are now mainstream enterprise. So as you guys roll out the enterprise version of building analytics and applications with Druid and Imply, they're going to have to get religion on this. And I think it's not hard, because it's distributed computing, which they're used to.
So how is that enterprise transition going? Because I can imagine people would want it, and are just kicking the tires or learning and then trying to put it into action. How are you seeing the adoption of the enterprise piece of it? >> The thing that's driving the interest is, for sure, doing more and more stuff on the internet. Because anything that happens on the internet, whether it's apps or web based, there's more and more happening there, and anything that is connected to the internet, anything that's serving customers on the internet, is going to generate an absolute mountain of data. And the question is not if you're going to have that much data, you do if you're doing anything on the internet; the only question is what are you going to do with it? So that's, I think, what drives the interest: people want to try to get value out of this. And then what drives the actual adoption is, I think, and I don't want to necessarily talk about specific folks, but within every industry I would say there are people that are leaders, there are organizations that are leaders, teams that are leaders. What drives a lot of interest is seeing someone in your own industry that has adopted new technology and has gotten a lot of value out of it. So a big part of what we do at Imply is identify those leaders, work with them, and then you can talk about how it's helped them in their business. And then also, I guess the classic enterprise thing, what they're looking for is a sense of stability, a sense of supportability, a sense of robustness, and this is something that comes with maturity. I think that the super high tech companies are comfortable using some open source software that rolled off the presses a few months ago; the big enterprises are looking for something that has corporate backing, they're looking for something that's been around for a while, and I think that Druid and technologies like it are reaching that level of maturity right now.
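The segment layout Gian mentioned a moment ago (historical data carved into files of a few million rows each, with known time ranges) is what lets a big cluster skip most of its files at query time. Here is a minimal, library-free sketch of that pruning idea; real Druid segments are columnar, compressed, and time-partitioned, and the four-row segment size here is shrunk purely for illustration.

```python
SEGMENT_ROWS = 4  # real segments hold a few million rows; shrunk for illustration

def build_segments(rows):
    """Partition time-ordered (timestamp, value) rows into fixed-size segments,
    recording each segment's time range so queries can skip irrelevant files."""
    rows = sorted(rows)
    segments = []
    for i in range(0, len(rows), SEGMENT_ROWS):
        chunk = rows[i:i + SEGMENT_ROWS]
        segments.append({"min_t": chunk[0][0], "max_t": chunk[-1][0], "rows": chunk})
    return segments

def query(segments, t_start, t_end):
    """Only scan segments whose time range overlaps the query interval."""
    hit = [s for s in segments if s["max_t"] >= t_start and s["min_t"] <= t_end]
    return sum(v for s in hit for t, v in s["rows"] if t_start <= t <= t_end)
```

With millions of real segments spread over thousands of servers, this range check is what keeps a single query from touching a multi-trillion row table in its entirety.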
It's interesting that supply chain has come up on the software side. That conversation happens a lot now. You're hearing about open source being great, but at cloud scale, being able to get at the data to identify opportunities and also potential vulnerabilities is a big discussion. Question for you on the cloud native side: how do you see cloud native, cloud scale with services like serverless, Lambda, edge emerging? It's easier to get into the cloud scale. How do you see the enterprise being hardened out with Druid and Imply? >> I think the cloud stuff is great. We love using it to build all of our own stuff; our product is of course built on other cloud technologies, and I think these technologies build on each other. Like I mentioned earlier, all software is built in layers, and cloud architecture is the same thing. What we see ourselves as doing is building the next layer of that stack. So we're building the analytics database layer. You saw when people first started doing these things in public cloud, the very first two services that came out were: you can get a virtual machine, and you can store some data and retrieve that data. But there's no real analytics on it, there's just kind of storage and retrieval. And then as time goes on, higher and higher levels get built out, delivering more and more value, and then the levels mature as they go up. And so the bottom layers are incredibly mature, the topmost layers are cutting edge, and there's kind of a maturity gradient between those two. And so what we're doing is we're building out one of those layers. >> Awesome. Abstraction layers, faster performance, great stuff. Final question for you, Gian: what's your vision for the future? How do you see Imply and Druid going? What's it look like five years from now?
>> I think, for sure, it seems like there's two big trends that are happening in the world, and it's going to sound a little bit self serving for me to say it, but I believe in what we're doing here; I'm here 'cause I believe it. I believe in open source, and I believe in cloud stuff. That's why I'm really excited that what we're doing is building a great cloud product based on a great open source project. I think that's the kind of company that I would want to buy from if I wasn't at this company and I was just building something: I would want to buy a great cloud product that's backed by a great open source project. So the way I see the industry going, the way I see us going, and I think what would be a great place to end up, just as an engineering world, as an industry, is a lot of these really great open source projects, doing what Kubernetes is doing for containers, what we're doing for analytics, et cetera. And then really first class, really well done cloud versions of each one of them, so you can choose: do you want to get down and dirty with the open source, or do you want to just have the abstraction of the cloud? >> That's awesome. Cloud scale, cloud flexibility, community getting down and dirty with open source, the best of both worlds. Great solution. Gian, thanks for coming on and thanks for sharing here in the Showcase. Thanks for coming on theCUBE. >> Thank you too. >> Okay, this is theCUBE Showcase Season 2, Episode 2. I'm John Furrier, your host. Data as Code is the theme of this episode. Thanks for watching. (upbeat music)
Mark Hill, Digital River and Dave Vellante with closing thoughts
(upbeat music) >> Dave Vellante: Okay, we're back with Mark Hill, who's the Director of IT Operations at Digital River. Mark, welcome to theCUBE. Good to see you. >> Thanks for having me. I really appreciate it. >> Hey, tell us a little bit more about Digital River. People know you as a payment platform, you've got marketing expertise. How do you differentiate from other e-commerce platforms? >> Well, I don't think people realize it, but Digital River was founded about 27 years ago, primarily as a one-stop shop for e-commerce, right? And so we offered site development, hosting, order management, fraud, export controls, tax, physical and digital fulfillment, as well as multilingual customer service, advanced reporting, and email marketing campaigns, right? So it was really just kind of a broad base for e-commerce. People could just go there, didn't have to worry about anything. What we found over time, as e-commerce has matured, is we've really pivoted to a more focused API offering, specializing in just our global seller services. And to us that means payment, fraud, tax, and compliance management. So our global footprint allows companies to outsource that risk management and expand their markets internationally very quickly and with low cost of entry. >> Yeah, it's an awesome business. And, you know, to your point, you were founded way before there was such a thing as the modern cloud, and yet you're a cloud native business. >> Yeah. >> Which I think talks to the fact that incumbents can evolve; they can reinvent themselves from a technology perspective. I wonder if you could first paint a picture of how you use the cloud. You use AWS; you know, I'm sure you've got S3 in there. Maybe we could talk about that a little bit. >> Yeah, exactly. So when I think of a cloud native business, you kind of go back to the history. Well, 27 years ago, there wasn't a cloud, right? There wasn't any public infrastructure.
It was, we basically stood our own data center up in a warehouse. And so over our history, we've managed our own infrastructure in collocated data centers, over time through acquisitions and just how things worked, you know, to over 10 data centers globally. For us it was expensive, both from a software and hardware perspective, as well as, you know, getting the operational teams and expertise up to speed too. So it was really difficult to maintain and ultimately not core to our business, right? Nowhere in our mission statement does it say that our goal is to manage data centers. So about five years ago, we started the journey from our hosted data centers into AWS. It was a hundred percent lift-and-shift plan, and we were able to complete that migration in a little over two years, right? Amazon really just fit for us. It was a natural place for us to land, and they made it really easy for us. Not to say it wasn't difficult, but once in the public cloud, we really adopted a cloud first vision, meaning that we'll not only consume their infrastructure as a service, but we'll also purposely evaluate and migrate to software as a service. So I come from a database background. An example would be migrating from self deployed and managed relational databases over to AWS RDS, the Relational Database Service. You know, you're able to utilize the backups, the standby, and the patching tools automagically, you know, with a click of the button. And that's pretty cool. And so we moved away from the time consuming operational tasks and really put our resources into revenue and generating new products, you know, like pivoting to an API offering. I always like to say that we stopped being busy and started being productive. >> Ha ha, I love that.
>> Yeah, exactly. I think, well, the first step for us was just to consume the infrastructure, right? But now we're looking at the targeted services that they have in there too. So, you know, we have our data streaming services. Log analytics, for example: we used to put it locally on the machine. Now we're just dumping it into an S3 bucket, and we're using Kinesis to consume that data, put it in Elastic, and go from there. And none of those services are managed by Digital River. We're just utilizing the capabilities that AWS has there. >> And as an e-commerce player, a retail company, were you ever concerned about moving to AWS as a possible competitor, or did you look at other clouds? What can you tell us about that? >> Yeah, and so I think e-commerce has really matured, right? And so we got squeezed out by the Amazons of the world. It's just not something that we were doing, but we had a really good area of expertise with our global seller services. So we evaluated Microsoft, we evaluated AWS, as well as Google. And, you know, back when we did that, Microsoft was Windows-based, and Google was just coming into the picture; they really didn't fit for what we were doing, but Amazon was just a natural fit. So we made a business decision, right? It was financially really the best decision for us. And so we didn't really put our feelings into it, right? We just had to move forward, and it's better than where we were at. And we've been delighted, actually. >> Yeah, it makes sense. Best cloud, best tech. >> Yeah. >> Yeah. I want to talk about ChaosSearch. A lot of people describe it as a data lake for log analytics. Do you agree with that? You know, what does that even mean? >> Well, from our perspective, it's because the self-managed solutions were costly and difficult to maintain. You know, we had older versions self-deployed, using Splunk, other things like that too.
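The S3-plus-Kinesis log pipeline Mark describes can be sketched roughly as follows. This is a minimal illustration, not Digital River's actual setup: the stream name, record shape, and partition key are all hypothetical.

```python
import json

def build_kinesis_records(log_lines, partition_key="app-logs"):
    """Package raw log lines as records for a Kinesis put_records call."""
    return [
        {"Data": json.dumps({"line": line}).encode("utf-8"),
         "PartitionKey": partition_key}
        for line in log_lines
    ]

records = build_kinesis_records(["GET /checkout 200", "POST /cart 500"])
# In production, a batch like this would be sent via boto3, e.g.:
#   boto3.client("kinesis").put_records(
#       StreamName="ecommerce-app-logs", Records=records)
# with the stream delivering into the S3 bucket that gets indexed,
# instead of logs piling up on individual machines.
```

The point of the design is that the data lands in S3 once and is consumed from there, so no host-local log storage has to be managed.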
So over time, we made a conscious decision to limit our data retention to, generally, seven days. But in a lot of cases it was zero; we just couldn't consume that log data because of the cost, which is intimidating in itself. Because of this limit, you know, we've lost important data points used for incident triage, problem management, trending, and other things too. So ChaosSearch has offered us a manageable and cost-effective opportunity to store months, or even years, of data that we can use for operations, as well as trending and automation. And really the big thing that we're pushing into is an event driven architecture, so that we can proactively manage our services. >> Yeah. You mentioned Elastic; I know I've talked to people who use the ELK Stack. They say there's this exponential growth in the amount of data, so you have to cut it off at whatever, I think you said seven days or less. You're not finding that with ChaosSearch? >> Yeah. Yeah, exactly. And that was one of the huge benefits here too. So, you know, we were losing out if there was a lower priority incident, for example, and people didn't get to it until eight, nine days later. Well, all the breadcrumbs are gone. So it was really just kind of a best guess, or the incident really wasn't resolved; we didn't find a root cause. >> Yeah. Like my video camera down there at my other house: somebody breaks in and I don't find out for two weeks, and then the video's gone. That kind of same thing. >> Yep. >> So how do you, can you give us some more detail on how you use your data lake, and ChaosSearch specifically? >> Yeah, yep. And so there's many different areas, but what we found is we were able to easily consolidate data from multiple regions into a single pane of glass for our customers. So internally and externally, you know, it relieves us of that operational support for the data extract, transformation, load process, right?
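The proactive, event-driven use of long retention that Mark alludes to can be sketched as a simple spike check over daily error counts: a check that only works once you keep more than seven (or zero) days of history. The window size, threshold factor, and sample data below are illustrative assumptions, not Digital River's actual logic.

```python
def error_spike(daily_error_counts, baseline_days=30, factor=3.0):
    """Flag a spike when the latest day's errors exceed `factor` times
    the trailing average -- this needs retained history to compute."""
    history, today = daily_error_counts[:-1], daily_error_counts[-1]
    window = history[-baseline_days:]
    baseline = sum(window) / len(window)
    return today > factor * baseline

# A clear spike (50 errors vs. a ~11/day baseline) trips the check;
# a normal day does not.
assert error_spike([10, 12, 9, 11, 10, 13, 50]) is True
assert error_spike([10, 12, 9, 11, 10, 13, 11]) is False
```

A real deployment would feed this from the retained log store and emit an event for an on-call team rather than returning a boolean, but the shape of the idea is the same.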
It also offered a seamless transition for the users who were familiar with Elasticsearch, right? It wasn't difficult to move over. And so all of these are a lot of selling points, benefits. And so now that we have all this data that we're able to capture and utilize, it gives us an opportunity to use machine learning, predictive analysis. And like I said, you know, driving to an event driven architecture. >> Okay. >> So that's really what it's offered, and it's been a huge benefit. >> So you're saying that you can speak the language of Elastic, you don't have to move the data out of an S3 bucket, and you can scale more easily. Is that right? >> Yeah, absolutely. And so for us, just because we're running in multiple regions to drive more high availability, having that data available from multiple regions in a single pane of glass, or a single way to utilize it, is a huge benefit as well. Just, you know, not to mention actually having the data. >> What was the initial catalyst to sort of rethink what you were doing with log analytics? Was it cost? Was it flexibility? Scale? >> I think all of those went into it. One of the main drivers: so last year we had a huge project. We have our ELK Stack, and it's probably from a decade ago, right? You know, a version point-oh-two or something; anyway, it's very old. And we went through a whole project to get that upgraded and migrated over, and we found it impossible to do internally, right? And so this was a method for us to get out of that business, to get rid of the security risks, the support risk, and have a way for people to easily migrate over. And it was just a nightmare here, consolidating the data across regions. So that was a huge thing, but there's also been the cost, right? We were finding it cheaper to use ChaosSearch, and have more data available, versus what we were doing currently in AWS. >> Got it.
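Because ChaosSearch exposes an Elasticsearch-compatible API over the data sitting in S3, "speaking the language of Elastic" means existing ELK-style queries carry over largely unchanged. A hedged sketch of such a request follows; the endpoint, index name, and field names are hypothetical, not Digital River's.

```python
import json

# Standard Elasticsearch query DSL: all ERROR-level events from the
# last 90 days -- a question a seven-day retention window could never
# answer, but months of data in S3 can.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-90d"}}},
            ]
        }
    },
    "size": 100,
}
payload = json.dumps(query)
# With the requests library, this would be POSTed to the
# Elasticsearch-compatible search endpoint, e.g.:
#   requests.post("https://chaos.example.com/app-logs/_search",
#                 data=payload,
#                 headers={"Content-Type": "application/json"})
```

The data itself never leaves the S3 bucket; only the query and its results move.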
I wonder if you could share maybe any stories that you have, or examples that underscore the impact that this approach to analytics is having on your business: maybe your team's everyday activities, any metrics you can provide, or even just anecdotal information. >> Yeah. And I think, you know, coming from an Oracle background here: Digital River historically has been an Oracle shop, right? And we've been developing a reporting and analytics environment on Oracle, and that's complicated and expensive, right? We had to use advanced features in Oracle, like partitioning and materialized views, and bring in other supporting software like Informatica, Hyperion, Essbase, right? And all of these required a large team with a wide set of expertise in these separate focus areas. And the amount of data that we're pushing at ChaosSearch would simply have overwhelmed this legacy method for data analysis in a relational database, right? Then there's the human toll, the stress of supporting that Oracle environment, a 24 by 7 by 365 environment, you know, which requires little or no downtime. So just that alone is a huge thing. It's allowed us to break away from Oracle; it's allowed us to use new technologies that make sense to solve business problems. >> You know, ChaosSearch is a really interesting company to me. I'm sure, like me, you see a lot of startups; I'm sure they're knocking on your door every day. And I always like to say, okay, what are they going after? Are they going after a big market? How are they getting product market fit? And it seems like ChaosSearch has really looked hard at log analytics, and kind of maybe disrupting the ELK Stack. But I see, you know, other potential use cases beyond analyzing logs. I wonder if you agree: are there other use cases that you see in your future? >> Yeah, exactly.
So I think one area would be Splunk, for example. We have that here too. We use Splunk versus, you know, flat file analysis or other ways to capture that data, just because from a PCI perspective it needs to be secured for our compliance and certification, right? So ChaosSearch allows us to do that. There was really a hodgepodge of authentication types that we used in our old environment, but ChaosSearch has a more easily usable one, one that we could set up, one that can really segregate the data and allow us to satisfy our PCI requirements too. So Splunk, but I think also really deprecating all of our Elasticsearch environments, our homegrown ones, but then also taking a hard look at what we're doing with relational databases, right? 27 years ago, there were only relational databases: Oracle and SQL Server. So we've been logging to those types of databases, and that's not cost-effective, it's not supportable. And so really getting away from that, putting the data where it belongs, where it's easily accessible in a secure environment, and allowing us to push our business forward. >> Yep. When you say "where the data belongs," right? It sounds like you're putting it in the bit bucket, S3, leaving it there because it's the most cost-effective way to do it, and then sort of adding value on top of it. That's what's interesting about ChaosSearch to me. >> Yeah, exactly. Yup. Versus the high-priced storage, you know, that you have to use for a relational database, and not to mention the standbys, the backups. So you're duplicating, triplicating all this data too, in an expensive manner. So yeah. >> Yeah. Copying, creating, moving data around, and it gets expensive. It's funny what you say about databases; it's true. But database used to be such a boring market. Now it's exploded. Then you had the whole NoSQL movement, and SQL, SQL became the killer app.
You know, it's like full circle, right? >> Yeah, exactly. >> Well, anyway, good stuff, Mark. Really appreciate you coming on theCUBE and sharing your perspectives. We'd love to have you back in the future. >> Oh yeah, no problem. Thanks for having me. I really appreciate it. (upbeat music) >> Okay, so that's a wrap. You know, we're seeing a new era in data and analytics. For example, we're moving from a world where data lives in a cloud object store and needs to be extracted, moved into a new data store, transformed, cleansed, structured into a schema, and then analyzed. This cumbersome and expensive process is being revolutionized by companies like ChaosSearch that leave the data in place and then interact with it in a multilingual fashion, with tooling that's familiar to analytics pros. You know, I see a lot of potential for this technology beyond just log analytics use cases, but that's a good place to start. Really, if I project out into the future, we see a trend of the global data mesh really taking hold, where a data warehouse or data hub or a data lake or an S3 bucket is just a discoverable node on that mesh, governed by automated computational processes. And I do see ChaosSearch as an enabler of this vision. But for now, if you're struggling to scale with existing tools, or you're forced to limit your retention because data is exploding at too rapid a pace, you might want to check these guys out. You can schedule a demo just by clicking the button on the site. Or stop by the ChaosSearch booth at AWS re:Invent; theCUBE is going to be there too. We'll have two sets, a hundred guests. I'm Dave Vellante. You're watching theCUBE, your leader in high-tech coverage.
Venkat Venkataramani, Rockset & Carl Sjogreen, Seesaw | AWS Startup Showcase
(mid tempo digital music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase: New Breakthroughs in DevOps, Data Analytics, and Cloud Management Tools. This segment is featuring Rockset, and we're going to be talking about data analytics. I'm your host, Lisa Martin, and today I'm joined by one of our alumni, Venkat Venkataramani, the co-founder and CEO of Rockset, and Carl Sjogreen, the co-founder and CPO of Seesaw Learning. We're going to be talking about the fast path to real-time analytics at Seesaw. Guys, thanks so much for joining me today. >> Thanks for having us. >> Thank you for having us. >> Carl, let's go ahead and start with you. Give us an overview of Seesaw. >> Yeah, so Seesaw is a platform that brings educators, students, and families together to create engaging learning experiences. We're really focused on elementary-aged students, and we have a suite of creative tools and engaging learning activities that helps get their learning and ideas out into the world, shared with family members. >> And this is used by over 10 million teachers and students and family members, across 75% of the schools in the US and in 150 countries. So you've got a great big global presence. >> Yeah, it's really an honor to serve so many teachers and students and families. >> I can imagine even more so now, with remote learning being such a huge focus for millions and millions across the country. Carl, let's go ahead and get the backstory. Let's talk about data. You've got a ton of data on how your product is being used, across millions of data points. Talk to me about the data goals that you set prior to using Rockset. >> Yeah, so, as you can imagine, with that many users interacting with Seesaw, we have all sorts of information about how the product is being used: which schools, which districts, what those usage patterns look like.
And before we started working with Rockset, a lot of our data infrastructure was really custom-built and cobbled together over the years. We had a bunch of batch jobs processing data; we were using some tools, like Athena, to make that data visible to our internal customers. But we had a very disorganized data infrastructure that, as we've grown, we realized was getting in the way of helping our sales and marketing and support and customer success teams really service our customers in the way that we wanted to. >> So, operationalizing that data to better serve internal users, like sales and marketing, as well as your customers. Give me a picture, Carl, of those key technology challenges that you knew you needed to solve. >> Yeah, well, at the simplest level, just understanding how an individual school or district is using Seesaw, where they're seeing success, where they need help, is a critical question for our customer support teams, and frankly for our school and district partners. A lot of what they're asking us for is data about how Seesaw is being used in their school, so that they can help target interventions. They can understand where there is an opportunity to double down on where they are seeing success. >> Now, before you found Rockset, you did consider a more traditional data warehouse approach, but decided against it. Talk to me about that decision. Why was a traditional data warehouse not the right approach? >> Well, one of the key drivers is that we are heavy users of DynamoDB. That's our main data store, and it has been a tremendous aid in our scaling. Last year, with the transition to remote learning, most of our metrics scaled by 10X, and Dynamo didn't skip a beat. It was fantastic in that environment.
But when we started really thinking about how to build a data infrastructure on top of it, using a sort of traditional data warehouse, a traditional ETL pipeline, it was going to require a fair amount of work for us to really build that out on our own on top of Dynamo. And one of the key advantages of Rockset was that it was basically plug and play for our Dynamo instance. We turned Rockset on, connected it to our DynamoDB, and were able within hours to start querying that data in ways that we hadn't before. >> Venkat, let's bring you into the conversation. Let's talk about the problems that you're solving for Seesaw, and also the complementary relationship that you have with DynamoDB. >> Definitely. I think with Seesaw, I'm a big fan of the product. We have two kids in elementary school that are active users, so it's a pleasure to partner with Seesaw here. If you really think about what they're asking for, what Carl's vision was for their data stack, the way we look at it is business observability. They have many customers, and they want to make sure that they're doing the right thing and servicing them better. And all of their data is in a very scalable, large-scale NoSQL store like DynamoDB. That makes it very easy for you to build applications, but it's very, very hard to do analytics on it. Rockset comes with all batteries included, including real-time data connectors, with Amazon DynamoDB. And so you can literally just point Rockset at any of your Dynamo tables, and even though it's a NoSQL store, Rockset will replicate the data in real time and automatically convert it into fast SQL tables for you to do analytics on. And so within one to two seconds of data getting modified, or new data arriving in DynamoDB from your application, it's available for query processing in Rockset, with full-featured SQL.
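As a sketch of what that full-featured SQL looks like in practice, a query over a collection synced live from DynamoDB might be submitted to Rockset's REST query endpoint roughly as below. The collection name, fields, and SQL are illustrative assumptions about Seesaw's data, not their actual schema.

```python
import json

# Hypothetical query: lesson-completion counts per school, against a
# collection that Rockset keeps in sync with a DynamoDB table.
sql = """
SELECT school_id, COUNT(*) AS lessons_completed
FROM commons.seesaw_events
WHERE event_type = 'lesson_complete'
GROUP BY school_id
ORDER BY lessons_completed DESC
LIMIT 20
"""
payload = json.dumps({"sql": {"query": sql}})
# Submitted with an API key, along the lines of:
#   requests.post("https://api.rs2.usw2.rockset.com/v1/orgs/self/queries",
#                 data=payload,
#                 headers={"Authorization": "ApiKey <key>",
#                          "Content-Type": "application/json"})
```

Because the sync is continuous, a result like this reflects application writes from a second or two earlier, with no ETL pipeline in between.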
And not just that. I think another aspect that was very important for Seesaw is that they didn't just want to do batch analytics. They wanted their analytics to be interactive, because a lot of the time you just see that something is wrong. It's good to know that, but oftentimes you have a lot more follow-up questions. Why is it wrong? When did it go wrong? Is it a particular release that we did? Is it something specific to the school district? Are they trying to use some part of the product more than other parts and struggling with it? Or anything like that. It really, I think, comes down to Seesaw's and Carl's vision of what that data stack should serve, and how we can use that to better serve the customers. And Rockset's indexing technology allows you to get not only real-time in terms of data freshness, but also the interactivity that comes with ad-hoc drilling down and slicing-and-dicing kinds of analytics; that is just our bread and butter. And so that is really how I see us not only partnering with Seesaw, allowing them to get the business observability they care about, but also complementing transactional databases that are massively scalable and born in the cloud, like DynamoDB. >> Carl, talk to me about that complementary relationship that Venkat just walked us through, and how that is really critical to what you're trying to deliver at Seesaw. >> Yeah, well, just to reiterate what Venkat said, I think we have so much data that any question you ask about it immediately leads to five other questions. We have a very seasonal business, as one example. Obviously in the summertime, when kids aren't in school, we have very different usage patterns than during this time right now, our critical back-to-school season, versus a steady state, maybe in the middle of the school year.
And so really understanding how data is trending over time, how it compares year over year, what might be driving those things, is something that frankly we just haven't had the tools to really dig into. There's a lot about that that we are still beginning to understand and dig into more. And so this iterative exploration of data is incredibly powerful to expose to our product team, our sales and marketing teams, to really understand where Seesaw's working and where we still have work to do with our customers. And that's so critical to us doing a good job for schools and districts. >> And how long have you been using Rockset, Carl? >> It's about six months now, maybe a little bit longer. >> Okay, so during the pandemic. So talk to me a little bit about the last 18 months, where we saw the massive overnight transition to remote learning, and there are still a lot of places that are in that, or in a hybrid environment. How critical was it to have Rockset to fuel real-time analytics and interactivity, particularly in a very challenging 18-month time period?
>> I want to talk about the data set, but I'd like to go back to Venkat 'cause what's interesting about this story is Seesaw is a customer of Rockset, Venkat, is a customer of Seesaw. Talk to me Venkat about how this has been helpful in the remote learning that your kids have been going through the last year and a half. >> Absolutely. I have two sons, nine and ten year olds, and they are in fourth and fifth grade now. And I still remember when I told them that Seesaw is considering using Rockset for the analytics, they were thrilled, they were overjoyed because finally they understood what I do for a living. (chuckling) And so that was really amazing. I think, it was a fantastic dual because for the first time I actually understood what kids do at school. I think every week at the end of the week, we would use Seesaw to just go look at, "Hey, well, let's see what you did last week." And we would see not only what the prompts and what the children were doing in the classroom, but also the comments from the educators, and then they comment back. And then we were like, "Hey, this is not how you speak to an educators." So it was really amazing to actually go through that, and so we are very, very big fans of the product, we really look forward to using it, whether it is remote learning or not, we try to use it as a family, me, my wife and the kids, as much as possible. And it's a very constant topic of conversation, every week when we are working with the kids and seeing how we can help them. >> So from an observability perspective, it sounds like it's giving parents and teachers that visibility that really without it, you don't get. >> That's absolutely correct . I think the product itself is about making connections, giving people more visibility into things that are constantly happening, but you're not in the know. Like, before Seesaw, I used to ask the kids, "How was school today? "what happened in the class?" And they'll say, "It was okay." 
It would be a very short answer; it wouldn't really have the depth that we are able to get from Seesaw. So, absolutely. And so it's only right that that level of observability is also available for their business teams, the support teams, so that they can also service all the organizations that Seesaw's working with, not only the parents and the educators and the students that are actually using the product. >> Carl, let's talk about that data stack, and then I'm going to open the can on some of the impacts that it's making for your internal folks. We talked about DynamoDB, but give me an audio-visual picture of the data stack. >> Yeah. So, we use DynamoDB as our database of record. We're now in the process of centralizing all of our analytics into Rockset, so that rather than having different batch jobs in different systems querying that data in different ways, we're really setting Rockset up as the source of truth for analytics on top of Dynamo. And then on top of Rockset, we're exposing that data both to internal customers, for that interactive, iterative SQL-style querying, but also bridging that data into the other systems our business users use. So Salesforce, for example, is a big internal tool, and we have that data now piped into Salesforce, so that a sales rep can run a report on a prospect to reach out to, or a customer that needs help getting started with Seesaw. And it's all plumbed through the Rockset infrastructure. >> From an outcome standpoint, I mentioned sales and marketing getting that visibility, being able to act on real-time data. How has it impacted sales in the six months since you've been using it? >> Well, I don't know if I can draw a direct line between those things, but it's been a very busy year for Seesaw, as schools have transitioned to remote learning.
And our business is really largely driven by teachers discovering our free product, finding it valuable in their classroom, and then asking their school or district leadership to purchase a school-wide subscription. It's a very bottoms-up sales motion. And so data on where teachers are starting to use Seesaw is the key input into our sales and marketing discussions with schools and districts. And so understanding that data quickly, in real time, is a key part of our sales strategy, and a key part of how we grow at Seesaw over time. >> And it sounds like Rockset is empowering those users, the sales and marketing folks, to really fine-tune their interactions with existing customers and prospective customers. And I imagine you on the product side, in terms of tuning the product. What are some of the things, Carl, that you've learned in the last six months that have helped you make better decisions on what you want Seesaw to deliver in the future? >> Well, one of the things that I think has been really interesting is how usage patterns have changed between the classroom and remote learning. We saw per-student usage of Seesaw increase dramatically over the past year, and really understanding what that means for how the product needs to evolve to better meet teacher needs, to help organize that information, since there's now a lot more of it, really helped motivate our product roadmap over the last year. We launched a new progress dashboard that helps teachers get an at-a-glance view of what's happening in their classroom. That was really in direct response to the changing usage patterns that we were able to understand with better insights into data. >> And those insights allow you to pivot and iterate on the product. Venkat, I want to just go back to the AWS relationship for a second. You both talked about the complementary nature of Rockset and DynamoDB. Here we are at the AWS Startup Showcase.
Venkat, just give the audience a little overview of the partnership that you guys have with AWS. >> Rockset fully runs on AWS, so we are a customer of AWS. We are also a partner. There are lots of amazing cloud data products that AWS has, including DynamoDB and AWS Kinesis, with which we have built-in integrations. So if you're managing data in AWS, we complement that, and we can provide very, very fast, interactive, real-time analytics on all of your datasets. The partnership has been wonderful; we're very excited to be in the Startup Showcase, and I hope this continues for years to come. >> Let's talk about the synergies between Rockset and Seesaw for a second. I know we talked about the huge value of real-time analytics, especially in today's world, where we've learned many things in the last year and a half, including that real-time analytics is no longer a nice-to-have for a lot of industries. Because I think, Carl, as you said, if we can't get access to the data, then there are questions we can't ask, and we can't iterate on operations. If we wait seconds for every query to load, then there are questions we can't ask. Talk to me, Venkat, about how Rockset is benefiting from what you're learning from Seesaw's usage of the technology. >> Absolutely. If you go to the first part of the question, on why businesses really go after real time: what is the driver here? You might have heard the phrase, "the world is going from batch to real-time." What does it really mean? What's the driving factor there? Our take on it is, I think it's about accelerating growth. Seesaw's product is amazing, and it'll continue to grow; it'll continue to be a very, very important product in the world. With or without Rockset, that will be true. The way we look at it is, once they have real-time business observability, with that inherent growth that they have, they can reach more people, they can put their product in the hands of more and more people, they can iterate faster.
And at the end of the day, it is really about having this very interesting platform, a very interesting architecture, to really make a lot more data-driven decisions and iterate much more quickly. In batch analytics, if you were able to make, let's say, five decisions a quarter, in real-time analytics you can make five decisions a day. So that's how we look at it. That is really, I think, the underpinning of why the world is going from batch to real time. And what have we learned from having Seesaw as a customer? I think Seesaw probably has one of the largest DynamoDB installations that we have looked at. We're talking about billions and billions of records, even though they have tens of millions of active users. And so it has been an incredible partnership, working with them closely. They have had a tremendous amount of input on our product roadmap, and some of that, like role-based access control and other things, has already become part of the product, thanks to the continuous feedback we get from their team. So we're delighted about this partnership, and I am sure there's more input that they have that we cannot wait to incorporate in our roadmap. >> I imagine, Venkat, as well, that you as the parent user, and your kids, probably have some input that goes to the Seesaw side. So this seems like a very synergistic relationship. Carl, a couple more questions for you. I'd love to know, here we are back at the back-to-school timeframe: we've got a lot of students coming back, and some are still remote learning. What are some of the things that you're excited about for this next school year that you think Rockset is really going to fuel or power for Seesaw? >> Yeah, well, I think schools are navigating yet another transition now, from a world of remote learning to a world of back to the classroom. But back to the classroom feels very different than it does at any other back-to-school timeframe.
Many of our users are in first or second grade. We serve early elementary age ranges, and some of those students have never been in a classroom before. They are entering second grade never having been at school. And that's hard. That's a hard transition for teachers and schools to make. And so as a partner to those schools, we want to do everything we can to help them manage that transition, in general and with Seesaw in particular. And the more we can understand how they're using Seesaw, and where they're struggling with Seesaw as part of that transition, the more we can be a good partner to them and help them really get the most value out of Seesaw in this new world that we're living in, which is sort of like normal, and in many ways not. We are still not back to normal as far as schools are concerned. >> I'm sure, though, the partnership that you provide to the teachers and the students can be a game changer as they're still navigating some very uncertain times. Carl, last question for you. I want you to point folks to where they can go to learn more about Seesaw, and how all those parents watching might be able to use this with their families. >> Yeah, well, seesaw.me is our website, and you can go to seesaw.me and learn more about Seesaw, and if any of this sounds interesting, ask your teacher, if they're not using Seesaw, to give it a look. >> Seesaw.me, excellent. Venkat, same question for you. Where do you want folks to go to learn more about Rockset and its capabilities? >> Rockset.com is our website. There is a free trial for... $300 worth of free trial credits. It's a self-service platform, you don't need to talk to anybody, all the pricing and everything is out there. So, if real-time analytics and modernizing your data stack is on your roadmap, go give it a spin. >> Excellent guys.
Thanks so much for joining me today, talking about real-time analytics, how it's really empowering both the data companies and the users to be able to navigate in challenging waters. Venkat, thank you, Carl, thank you for joining us. >> Thanks everyone. >> Thanks Lisa. >> For my guests, this has been our coverage of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools. I am Lisa Martin. Thanks for watching. (mid tempo music)
Sean Knapp, Ascend.io & Jason Robinson, Steady | AWS Startup Showcase
(upbeat music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics, Cloud Management Tools, featuring Ascend.io for the data and analytics track. I'm your host, John Furrier with theCUBE. Today, we're proudly joined by Sean Knapp, CEO and founder of Ascend.io, and Jason Robinson, who's the VP of Data Science and Engineering at Steady. Guys, thanks for coming on, and congratulations, Sean, on the continued success, loved our cube conversation, and Jason, nice to meet you. >> Great to meet you. >> Thanks for having us. >> So, the session today is really kind of looking at automating analytics workloads, right? And Steady is a customer. Sean, talk about the relationship with the customer Steady. What's the main product, what's the core relationship? >> Yeah, it's a really great question. When we work with a lot of companies like Steady, we're working hand in hand with their data engineering teams to help them onboard onto the Ascend platform, build these really powerful data pipelines fueling their analytics and other workloads, and really helping to ensure that they can be successful at getting more leverage and building faster than ever before. So we tend to partner really closely with each other's teams, and really think of them even as extensions of each other's own teams. I watch in Slack oftentimes, and our teams just go back and forth, as if we were all just part of the same company. >> It's a really exciting time, Jason, great to have you on as a person cutting your teeth into this kind of what I call next-gen data as intellectual property. Sean and I chatted in a CUBE conversation previous to this event about how every company is a data company, right? And we've heard that cliche. >> Right. >> But it's true, right? It's getting more powerful with the edge.
You're seeing more diverse data, faster data, small, big, large, medium, all kinds of different aspects and patterns. And it's becoming a workflow kind of intellectual property paradigm for companies. >> That's right. >> It's not just the tech, it's not just the database, it's the data itself: data in flight, it's moving around, it's got value. What's your take-- >> Absolutely. >> On this trend? >> Basically, Steady helps our members, and we have a community of members, earn more income. So we want to help them steady their financial lives. And that's all based on data. So we have a web app, you could go to the iOS Store, you could go to the Google Play Store, you can download the app. And we have a large number of members, 3 million plus, who are actively using this. And we also have a very exciting new product called Income Passport. And this helps 1099 and mixed-wage earners verify their income, which is very important for different government benefits. And then third, we help people with emergency cash grants as well as awards. So all of that is built on a bedrock of data; if you're using our apps, it's all data-powered. So what you were mentioning earlier, from pipelines that are running in real time to anything that's a kind of small data aggregation, we do everything from small to real-time and large. >> You guys are like a multiple-sided marketplace here: you're a FinTech app, as well as the future of work, and with virtual space-- >> That's right. >> Happening now, this actually encapsulates kind of the critical problems that people are trying to solve right now. You've got multiple stakeholders. >> That's right. >> In the data. >> Yes, we absolutely do. So we have our members, but we also, within the company, have product, we have strategy, we have a growth team, we have operations. So data engineering and data science also work with a data analytics organization. So at Steady we're very much a data company.
And we have a data organization led by our chief data officer, and we have data engineering and data science, which are my teams, but also the business insights and analytics. So a lot of what we're building on the data engineering side is powering those insights and analytics that the business stakeholders use every day to run the organization. >> Sean, I want to get your thoughts on this, because we heard from Emily Freeman, premiering her keynote talk about how this revolution in DevOps means it's not just one persona anymore. I'm a release engineer, I'm this kind of engineer; now all developers are developers. You have some specialty, but for the most part, the team makeups are changing. We touched on this in our cube conversation. The journey of data is not just for the data people, the data folks; they're developers too. So the confluence of data science, data management, and development is changing the team and cultural makeup of companies. Could you share your thoughts on this dynamic and how it impacts customers? >> Absolutely. I think we're finding a similar trend to what we saw a number of years ago, when we talked about how software was eating the world and every company was becoming a software company. As a result, we saw this proliferation and expansion of what the software roles inside a company look like, pulled through this entire new era of DevOps. We're finding that same pattern now emerging around data: not only is every company a software company, every company is a data company, and data really is that fuel, that oil, for the business. And in doing so, we're finding, as Jason describes, that it's pervasive across the team. It is no longer just one team creating some insights and reports around operational analytics, or maybe a team over here doing data science or machine learning. It is expansive.
And I think the really interesting challenge that starts to come with this, too, is that so many data teams are over capacity. We did a recent study that highlighted that 96% of data teams are at or over capacity; only 4% had spare capacity. But as a result, the net is being cast even wider, to pull in people from broader and more adjacent domains to all participate in the data future of their organization. >> Yeah, and I'd love to get you guys' reaction to this conversation with Andy Jassy, who's now the CEO of Amazon. When he was the CEO of AWS last year, I talked with him about how the old guard and new guard are thinking around team formations. Obviously team capacity is growing and challenged when you've got the right formula. So that's one thing, right? But what if you don't have the right formula? What if you're on the skills-gap or team-formation side of it, where maybe two years ago the mandate came down: well, we've got to build a data team, even if it takes two years, if you're not acquisitive. And this is what Andy and I were talking about: the thinking and the mindset of that mission, and being open to discovering and understanding the changes, because if you were deciding what your team was two, three years ago, that might have changed a lot. So team capacity, Sean, to your point, if you got it right, that's a challenge in and of itself, but what if you don't have it right? What do you guys think about this? >> Yeah, I think that's exactly right. Basically, trying to look and gaze into the crystal ball and see what's going to happen in a year, or two years, or even six months, is quite difficult. And if you don't have it right, you do spend a lot of time because of the technical debt that you've amassed. And we certainly spend quite a bit of time with technical debt for things we wanted to build. So, deconvolving that, getting those ETLs to a runnable state, getting performance there, that's what we spend a bit of time on.
And yeah, it's really part of the package. >> What do you guys see as the big challenge on teams? The scaling challenge, okay? Formation is one thing, Sean, but getting it right, getting it formed properly and then scaling it, what are the big things you're seeing? >> I think one of the overarching management themes in general is that the highest performing teams are those where the individual with the context and the idea is able to execute as far and as fast and as efficiently as possible, removing a lot of those encumbrances. To put it a slightly different way: if DevOps basically boiled down to, how do we help more people write more software faster and safely, data ops would be, very similarly, how do we enable more people to do more things with data faster and safely? And to do that, I think the era of these massive multi-year efforts around data is gone, and hopefully in the not too distant future even these multi-quarter efforts around data are gone, and we get into a much more agile, nimble methodology where smaller initiatives and smaller efforts are possible for more diverse skillsets across the business. And really what we should be doing is leveraging technology and automation to ensure that people are able to be productive and efficient, that we can trust our data, and that systems are automated. These are problems that technology is good at. And so in many ways, just as Amazon in the early days described getting people out of the muck of DevOps, I think we're going to do the same thing around getting people out of the muck of the data and get them really focused on the higher-level aspects. >> Yeah, we're going to get into that complexity, the heavy lifting, the muck, and taking that heavy lifting away from the customers. But I want to go back real quick to Jason while we're on this topic.
Jason, I was just curious, how much has your team grown in the recent year, and how much could've or should've it grown? What's the status, and how has Ascend helped you guys? What's the dynamic there? 'Cause that's their value proposition. So, take us through that. >> Absolutely. So, since the beginning of the year, data engineering has doubled. We're a lean team, we certainly use the agile mindset and methodologies, but we have essentially doubled. So a lot of that is, there's just so much to do, and the capacity problem is certainly there. So we also spend a lot of time figuring out exactly what the right tooling is. And I was mentioning the technical debt. So there's the big-O notation, if you will, of technical debt. When you're building new things, you're fixing old things, and then you're trying to maintain everything, that scaling starts to hit hard. So even if we continue to double, I mean, we could easily add more data engineers. And a lot of that is, I mean, you know about the hiring cycles: there's a lot of great talent, but it's difficult to make all of those hires. So we do spend quite a bit of time thinking about exactly what tools data engineering is using day-to-day. And what I mentioned were technologies from the streaming side all the way to the small-batch things. But something that starts as a small batch can grow and grow and grow and take, say, 15 hours; it's possible, I've seen it. And getting that back down, and managing that complexity while not overburdening people who probably don't want to spend all their waking hours building ETLs, maintaining ETLs, putting in monitoring, putting in alerting, that I think is quite a challenge. >> It's so funny, because when you mentioned 15 hours, you didn't roll your eyes, but you almost did. But people want it yesterday, they want real time, so there's a lot of demand-- >> Yes.
>> On the minds of the business-outcome side of it. So I've got to ask you, because this comes up a lot with technical debt, and now we're starting to see that come into the data conversation. I'm always curious: is there a different kind of technical debt with data? Because again, data is like software, but it's a little bit more elusive in the sense that it's always changing. So what kind of technical debt do you see on the data side that's different than, say, the software side? >> Absolutely, now that's a great question. So, a lot of thinking about your data, and structuring your data, and how you want to use that data goes into a particular project, and that might be different from what happens after stakeholders have new considerations and new products and new items that need to be built. So let's say you have a document store, or you have something that you thought was going to be nice and structured. Unless you take the time and go through and say, well, let's architect it perfectly so that we can handle that, how it evolves to support those particular products means you're going to make trade-offs and choices, and essentially that debt builds up. So you start cutting corners, you start changing your normalization, you start essentially taking on those implicit schemas that then tend to build into big things, big implicit schemas. And of course, with an implicit schema, you're going to have a lot of null values, you're going to have a lot of items to deal with. So, how do you deal with that? And then you also have the opportunity to create keys and values, and oops, do we take out those keys that were slightly misspelled? So, I could go on for hours, but basically the technical debt certainly is there with data. I see a lot of this as just a spectrum of technical debt, because it's all trade-offs that you made to build a product, and the inefficiencies start to hit you.
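The "implicit schema" problem Jason describes, where ad hoc writes to a flexible document store quietly accumulate null-heavy and misspelled columns, can be made concrete with a small sketch. The documents and field names here are invented purely for illustration:

```python
# Illustrative only: how a flexible document store grows an "implicit
# schema." Each writer adds fields ad hoc, and a misspelled key
# ("adress") silently becomes a whole new column, null everywhere else.
docs = [
    {"user_id": 1, "name": "Ada", "address": "12 Main St"},
    {"user_id": 2, "name": "Grace", "adress": "9 Oak Ave"},   # typo'd key
    {"user_id": 3, "name": "Alan", "address": None, "plan": "pro"},
]

# The union of every key ever written IS the implicit schema
implicit_schema = sorted({k for d in docs for k in d})
print(implicit_schema)
# ['address', 'adress', 'name', 'plan', 'user_id']

# Flattening each document against that schema shows the null sprawl
rows = [{k: d.get(k) for k in implicit_schema} for d in docs]
null_count = sum(1 for r in rows for v in r.values() if v is None)
print(null_count)  # 6
```

Three small documents already yield five columns and six nulls; at billions of records, as discussed above, that sprawl is exactly the debt that has to be paid down.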
So, the 15-hour ETL I was mentioning: basically, you start with something, you were building things for stakeholders, and essentially you have so much complex logic within that. So for the transforms that you're doing, if you're thinking of the bronze, silver, gold kind of framework, going from that bronze to a silver you may have a massive number of transformations, or just a few, just to lightly dust it. But you could also go to gold with many more transformations, and managing that, managing the complexity, managing what you're spending on servers day after day after day, that's another real challenge of that technical debt stuff. >> That's a great lead into my next question for Sean: this is the disparate-system complexity. Technical debt in software was always, kind of the belief was, oh yeah, I'll take some technical debt on and work it off once I get visibility into, say, unit economics or some sort of platform or tool feature, and then you work it off as fast as possible. This becomes the art and science of technical debt. Jason, what you're saying is that this can get unwieldy pretty quickly. You've got state, and you've got a lot of different moving parts. This is a huge issue, Sean; technical debt in the data world is much different architecturally. If you don't get it right, this is a huge, huge issue. Could you illuminate why that is, and what you guys are doing to help unify and change some of those conditions? >> Yeah, absolutely. When we think about technical debt, I'll keep drawing some parallels between DevOps and data ops, 'cause I think there's a tremendous number of similarities in these worlds. We used to always have the saying that "Your tech debt grows linearly across microservices, but exponentially within services."
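Jason's bronze/silver/gold framing above can be sketched minimally. This is a hedged illustration, not Steady's actual pipeline; the records, fields, and rules are invented:

```python
# Toy medallion layering: bronze = raw records as ingested; silver =
# lightly cleaned and typed ("lightly dust it"); gold = business-level
# aggregates built with heavier transformation logic.
bronze = [
    {"amount": "10.50", "status": "OK "},
    {"amount": "bad",   "status": "OK"},    # unparseable raw row
    {"amount": "4.25",  "status": "FAIL"},
]

def to_silver(records):
    # light transforms: trim strings, coerce types, drop unparseable rows
    out = []
    for r in records:
        try:
            out.append({"amount": float(r["amount"]),
                        "status": r["status"].strip()})
        except ValueError:
            continue  # a real pipeline would quarantine these for review
    return out

def to_gold(records):
    # heavier business logic: total of successful transactions
    return sum(r["amount"] for r in records if r["status"] == "OK")

silver = to_silver(bronze)
print(to_gold(silver))  # 10.5
```

The cost Jason points at lives in these transform layers: each added silver-to-gold rule is more compute spent per run, which is how a pipeline creeps toward those 15-hour executions.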
And so you want that right level of architecture and composability, if you will, of your systems, where you can deploy changes, you can test, and you can have high degrees of confidence in the roll-outs. And I think the interesting part on the data side, as Jason highlighted, is that the big-O notation for tech debt in the data ecosystem is still fairly exponential or polynomial in nature, as right now we don't have great decomposition of the components. We have different systems: we have a streaming system, we have databases, we have document stores, and so on. But how the whole data pipeline, data engineering part works generally tends to be pretty monolithic in nature. You take your whole data pipeline, you deploy the whole thing, and you basically just cross your fingers. Hopefully it's not 15 hours, but if it is 15 hours, you go to sleep, you wake up the next morning, grab a coffee, and then maybe it worked. And that iteration cycle is really slow. And so when we think about how we can improve these things, it's combinations of intelligent systems that do instantaneous schema detection and validation. It's things like automated lineage and dependency tracking, so you know, when you deploy code, what piece of data it affects. It's things like automated testing on individual core parts of your data pipelines, to validate that you're getting the expected output that you need. So it's pulling a lot of these same DevOps-style principles into the data world, which, going back to how do you help more people build more things faster and safely, is really designed for rapid iterations and rapid feedback, so you know if there are breaks in the system much earlier on. >> Well, I think, Sean, you're onto something really big there.
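A toy version of the "instantaneous schema detection and validation" idea Sean lists, checking a batch against an expected schema before a step runs so a break surfaces immediately instead of 15 hours in, might look like the sketch below. This is not Ascend's actual engine; the schema and helper names are invented:

```python
# Hypothetical pre-flight check for one pipeline step: validate each
# incoming row against the expected schema, and fail fast with specific
# errors instead of discovering the break hours into a monolithic run.
EXPECTED = {"user_id": int, "score": float}

def validate(batch, schema=EXPECTED):
    errors = []
    for i, row in enumerate(batch):
        for col, typ in schema.items():
            if col not in row:
                errors.append(f"row {i}: missing column {col}")
            elif not isinstance(row[col], typ):
                errors.append(f"row {i}: {col} is "
                              f"{type(row[col]).__name__}, want {typ.__name__}")
    return errors

good = [{"user_id": 1, "score": 0.9}]
bad  = [{"user_id": "1", "score": 0.9},  # wrong type
        {"score": 0.1}]                  # missing column
print(validate(good))       # []
print(len(validate(bad)))   # 2
```

In a real platform this kind of check would gate each step and feed lineage metadata, so a deploy that breaks a downstream consumer is caught at submit time rather than in the morning.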
And I think this is something that's emerging pretty quickly at cloud scale, call it 2.0 or whatever version we're in: the systems-thinking mindset. 'Cause you mentioned the model that was essentially a silo or subsystem. It was cohesive in its own way, but it was monolithic. Now you have a decomposed set of data pieces that have to work together. So Jason, this is the big challenge that not everyone is really talking about yet; I think these guys are, and you're using them. What are you unifying? Because this is systems thinking, operating-systems thinking; this is not a database problem. It's a systems problem applied to data, where databases are just pieces of it. What are your thoughts? >> That's absolutely right. And so, Sean touched on composability of ETL, and thinking about reusable components, thinking about pieces that all fit together, because as you're building something as complex as some of these ETLs are, we do think about the platform itself and how that lends to the overarching output. So one thing is being able to actually see the different components of an ETL and blend those in, using the DRY principle: don't repeat yourself. So you essentially are able to take pieces that one person built; maybe John builds a couple of our connectors coming in, Sean also has a bunch of transforms, and I just want this stuff out, so I can use a lot of what you guys have already built. I think that's key, because a lot of engineering, and data engineering, is about managing complexity. So taking that complexity and essentially getting it out fast, and getting it out error-free, is where we're going with all of the data products we're building. >> What are some of the complexities that you guys have, and what are these guys doing to solve that problem for you? Can you be specific? This is a big problem everyone's having; I'm seeing it all over the place.
>> Absolutely, so I could start at a couple of places. I don't know if you guys are on the three Vs, four Vs, or five Vs, but we have all of those. And if you go to that four or five V model, there is the veracity piece, where you have to ask yourself: is it true? Is it accurate, and when? Change happens throughout the pipeline; change can come from webhooks, change can come from users. You have to make sure that you're managing that complexity. And what we're building, I mentioned that we are paying down a lot of tech debt, but we're also building new products. And one quite challenging ETL that we're building is something going from a document store to an analytical application. So in that document store, we talked about flexible schema. Basically, you don't really know exactly what you're going to get day to day, and you need to be able to manage that change through the whole process in a way that the ultimate business users find valuable. So that's one of the key applications that we're working on right now, and that's one that the team at Ascend and my team are working on hand in hand, going through a lot of those challenges. I also watch the Slack, just as Sean does, and it's a very active discussion board. So it is essentially like they're just partnering together. It's fabulous, but yeah-- >> And you're seeing kind of a value on this too, I mean, in terms of output, what are the business results? >> Yes, absolutely. So yes, the fifth V: value. Getting to that value, there are a few pieces to it. There are some data products that we're building within that product, and they're data science and data analytics based products that essentially do things with the data that help the user. There's also the question of exactly the usage, and those kinds of metrics that people in ops want to understand, as well as our growth team. So we have internal and external stakeholders for that.
Jason, this is a great use case, a great customer; Sean, you guys are automating. For the folks watching, who are seeing their peer living the dream here on the data journey, as we say, things are happening. What's the message to customers that you guys want to send? Because you guys are really cutting your teeth into a whole other level of data engineering, data platform, that's really about the systems view and about cloud. What's the pitch, Sean? What should people know about the company? >> Absolutely, yeah. Well, one, I'd say even before the pitch, I would encourage people to not accept the status quo. And in particular, in data engineering today, the status quo is an incredibly high degree of pain and discomfort. And I think the important part of why Ascend exists, and why we're so helpful for our customers, is there is a much more automated future of how we build data products, how we optimize those, and how we can get a larger cohort of builders into the data ecosystem. And that helps us get out of the muck, as we talked about before, and put really advanced technology to work for more people inside of our companies, to build these data products leveraging the latest and greatest technologies to drive increased business value faster. >> Jason, what's your assessment of these guys? People watching might say, hey, you know what, I'm going to contact them, I need this. How would you talk about Ascend to your peers? >> Absolutely. I think the whole process has been a great partnership. We started with a POC; I think Ascend likes to start with three use cases, I think we came out with four, and we went through the ones that we really cared about and really wanted to bring value to the company with. So we have roadmaps for some, as we're paying down technical debt and transitioning; others we can go to directly.
And I think that, just like you're saying, John, with that systems view of everything you're building, where it makes sense, you can actually take a lot of that complexity and encapsulate it in a way that you can essentially manage it all in that platform. So the Ascend platform has the composability piece that we touched on. And not only can you compose it, you can drill into it. My team is super talented and loves to drill into it, to open up each of those data flows, each of the components therein, and it has the control there with the combination of Spark SQL, PySpark, Scala, and so on. And I think that the variety of connections is also quite helpful. So thinking about the DRY principle from a systems perspective is extremely useful, because it's DRY; you often get that in a code review, right? "I think you can be a little bit more DRY here." >> Yeah. >> But you can really do that in the way that you're composing your systems as well. >> That's a great, great point. One quick thing for the folks watching who are trying to figure this out, with a lot of architecture going on and a lot of people looking at different solutions: what have you learned that you could pass along as a tip, to avoid maybe some scar tissue? Tips of the trade, where you can say, hey, go this way, be careful. What are some of the learnings? Could you give a few pointers to folks out there who are kicking tires on the direction? What's the wrong direction, and what does the right direction look like? >> Absolutely. I don't know how much time we have; that feels like a few days' conversation as far as ways to go wrong. But absolutely, I think that thinking through exactly where you want to be is the key.
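The composable, DRY pipeline idea that Jason and Sean discuss, small reusable transforms authored once by anyone on the team and chained into flows, can be sketched as follows. This is a generic illustration, not the Ascend platform's API; all function names are invented:

```python
# Hedged sketch of composable, DRY data flows: independently authored,
# reusable steps are chained into one pipeline instead of each ETL
# re-implementing the same cleanup logic.
from functools import reduce

def drop_nulls(rows):
    # reusable component 1: drop any row containing a null value
    return [r for r in rows if all(v is not None for v in r.values())]

def rename(mapping):
    # reusable component 2: parameterized column rename, built once
    def step(rows):
        return [{mapping.get(k, k): v for k, v in r.items()} for r in rows]
    return step

def pipeline(*steps):
    # compose any sequence of steps into a single callable data flow
    return lambda rows: reduce(lambda acc, s: s(acc), steps, rows)

flow = pipeline(drop_nulls, rename({"usr": "user_id"}))
print(flow([{"usr": 1, "v": 2}, {"usr": None, "v": 3}]))
# [{'user_id': 1, 'v': 2}]
```

The design point is the one Jason makes: one person's connector or transform becomes a building block everyone else can reuse, so complexity is managed in small, testable pieces rather than in one monolithic ETL.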
Otherwise it's kind of like when you're writing a ticket in Jira: if you don't have clear success criteria, if you don't know where you're going to go, then you'll end up somewhere building something, and it might work. But if you think through the exact destination that you want to reach, that will drive a lot of the decisions as you think backwards to where you started. And also, Sean mentioned challenging the status quo. I think that you really have to be ready to challenge the status quo at every step of that journey. So if you started with some particular legacy service, and it's not essentially performing what you need, then it's okay to just take a step back and say, well, maybe that's not the one. So I think that thinking through the system, just like you were saying, John, and also having a visual representation of where you want to go, is critical. So hopefully that encapsulates a lot of it, but yes, the destination is key. >> Yeah, and having an engineering platform that also unifies the multiple components, and it's agile. >> That's right. >> It gets you out of the muck, and at the end of the day the undifferentiated heavy lifting is a cloud play. >> Absolutely. >> Sean, wrap it up for us here. What's the bumper sticker for your vision? Share the founding principles of the company. >> Absolutely. For us, I started the company as a former CTO in recovery. The last company I founded, we had nearly 60 people on our data team alone, and had invested tremendous amounts of effort over the course of eight years. And one of the things that I've learned is that, over time, innovation comes just as much from deciding what you're no longer going to do as from what you're going to do. And focusing heavily on, how do you get out of that muck? How do you continue to climb up that technology stack? That is incredibly important.
And so really, we are excited to be a part of it as the industry continues to climb to higher and higher levels. We're building more and more advanced levels of automation, and what we call our data awareness, into the automated engine of the Ascend platform, which takes us across the entire data ecosystem, connecting and automating all data movement. And so we have a very exciting vision for this fabric that's emerging over time. >> Awesome. Sean, thank you so much for that insight; Jason, thanks for coming on as a customer of Ascend.io. >> Thank you. >> I appreciate it, gentlemen, thank you. This has been the track on automating analytics workloads, here at the AWS Startup Showcase with the hottest companies, featuring Ascend.io. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)
Andy Jassy, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. Welcome back to theCUBE's live coverage of AWS re:Invent 2020. It's virtual this year; we're not in person because of the pandemic, so we're doing theCUBE Virtual. I'm your host, John Furrier, here with Andy Jassy, the CEO of Amazon Web Services, in for his annual end-of-show appearance on theCUBE. This year it's virtual. Andy, good to see you remotely, Seattle to Palo Alto. Dave couldn't make it, a personal conflict, but he says hello, great to see you. >> Great to see you as well, John. It's an annual tradition on the last day of re:Invent. I wish we were doing it in person, but I'm glad at least we're able to do it virtually. >> The good news is, I know you got to rest last night. Normally at re:Invent we're both losing our voices at the end of the show, me more than you, and we're just at the end of it like, okay, relief. It's different this time; it's been three weeks, and it's been virtual. You guys had a unique format this year, and it went much better than I expected, because I was pretty skeptical about these long, multi-day or multi-week events. You guys did a good job of timing it out and creating these activations, with key news starting with your keynote on December 1st. Now, at the end of the three weeks, tell me, are you surprised by the results? Can you give us a feeling for how you think everything went? What's your take so far as we close out re:Invent? >> Well, I think it's going really well. I mean, we always know more as we get past re:Invent and start collecting all the feedback, but we've been watching all the metrics, and you know, there's trade-offs.
Of course, all of us, given our druthers, would be together in Las Vegas, and I think it's hard to replace that feeling of being with people, and the excitement of learning about things together and making decisions together, after you see different sessions, that are going to drive big changes in your company and for your customer experience. And there's a community piece, the buzz from being there, there's a concert; I think people like being with one another. But I think this was the best that any of us could imagine doing for a virtual event, and we had to really reinvent re:Invent and all the pieces of it. And some of the positive trade-offs are that you get a lot more engagement than you would normally get in person. Last year we had about 65,000 people in Las Vegas; this year we had 530,000 people registered for re:Invent, and over 300,000 participated in some fashion. All the sessions had a lot more people participating, just because you remove the constraints of travel and cost. So there are trade-offs. I think we prefer being together, but I think it's been a really good community event and learning event for our customers, and we've been really pleased with it so
far. >> No doubt, I would totally agree with you. I think a lot of people are like, hey, I love to walk the floor and discover serendipitous moments, finding an exhibitor in the exhibit hall, or attending a session, or going to a party, bumping into friends and making new friends. But one of the things I want to get your reaction to, and it comes up a lot: we've been doing a lot of CUBE Virtual for the past year, and everyone pretty much agrees that when we go back, it's going to be a hybrid world, in the sense of events as well as cloud. You know that. But I think one of the things I noticed this year with re:Invent is that it was almost a democratization of re:Invent. You really had to reinvent the format. You had 300,000-plus people attend, out of 530,000 registered, and now you've got a different kind of beehive community. You're a bar-raiser thinker; it's part of the culture of Amazon. So I've got to ask you: do the economics of this new kind of reach impact you, and how do you raise the bar to keep the best of the face-to-face when it comes back? And then, if you keep the virtual piece, any thoughts on how to leverage it and stay more open? It was free; you guys made it free this year, and people did show up. >> Yeah, it's a really good question, and it's probably a question we'll be better equipped to answer in a month or two, after the debrief we always do after re:Invent. Actually, I really enjoy that meeting, because the collective AWS team works so hard on this event; there are so many months of work across everything, all the product teams, all the marketing folks, all the event folks, and I think they do a terrific job with it. And we do about a two-and-a-half, three-hour debrief on everything we did: the things we thought went really well, the things we thought we could do better, and all the feedback we get from our community. And so I wouldn't be surprised if we find things from what we tried this year that we incorporate into what we do when we're back in person again. Of course, none of us really knows when we'll be back in person again. Re:Invent happens to fall at a time of year, early December, where, with a lot of people seemingly able to get vaccinated by late spring or early summer, you could kind of imagine that we might be able to do re:Invent in person next year. We'll have to see; I think we all hope we will.
But I'm sure there are a number of pieces that we will take from this and incorporate into what we do in person. And then it's just a matter of how far you go. >> Fingers crossed, and you know it's a hybrid world for theCUBE too, and for re:Invent and clouds. Let's get into the announcements. I want to get your take as you look back now. I mean, you guys made a lot of announcements this year. Which ones did you like? Which ones do you think jumped off the page, which ones resonated the most or had the most impact? Can you share some stats on how many announcements and launches you did this year? >> We had about 150 different new services and features that we announced over the last three weeks of re:Invent, and on the question you're asking, I could easily spend another three hours, like my keynote, telling you all the ones that I thought were important. But I think some of the ones that really stood out for people: first, on the compute side, I just think the excitement around what we're doing with chips is very clear. What we've done with Graviton2, our generalized compute, to give people 40% better price performance than they could find with the latest generation of x86 processors, is just a huge deal. If you can save 40% in price performance on compute, you get a lot more done for less. Then there's some of the chip work we're doing in machine learning, with Inferentia, the inference chip that we built, and then what we announced with Trainium, the machine learning training chip. People are very excited about the chip announcements. I think also, on the container side, as people move to smaller and smaller units of compute, people were very taken with the notion of EKS and ECS Anywhere, so they can run whatever container orchestration framework they're running in AWS also on premises, to make it easier to manage their deployments and containers. I think data stores was another space where people realize how much more data they're dealing with today. We gave a couple of statistics in the keynote that I think are kind of astonishing: every hour today, people are creating more content than there was in an entire year 20 years ago, and people expect more data to be created in the next three years than in the prior 30 years combined. These are astonishing numbers, and they require a brand-new reinvention of data stores. And so I think people are very excited about Block Express, which is the first SAN in the cloud, and they're really excited about Aurora in general, but especially Aurora Serverless v2, which allows you to scale up to hundreds of thousands of transactions per second and saves about 90% compared to provisioning for peak; people are very excited about that. In machine learning, SageMaker has just been a game changer in the ease with which everyday developers and data scientists can build, train, tune, and deploy machine learning models, and so we just keep knocking out things that are hard for people. Last year we launched the first IDE for machine learning, SageMaker Studio. This year, look at the things we announced: Data Wrangler, which changes the process of data prep, one of the most time-consuming pieces in machine learning; our Feature Store; the first CI/CD for machine learning, with Pipelines; or Clarify, which allows you to have explainability in your models.
Those are big deals to people who are trying to build machine learning models. And I'd say probably the last thing we hear over and over again is the excitement around Connect, our contact center service, which is growing unbelievably fast. The fact that it's so easy to get started with, so easy to scale, and so much more cost-effective, built from the ground up on the cloud with machine learning and AI embedded; and then adding capabilities to give agents the right information at the right time about customers and products, and real-time capabilities for supervisors to know when calls are going off the rails, and to be able to stop the contact before it becomes something that hurts the brand. Those are all big deals that people have been excited about. >> On Connect, I want to jump on that for a second, because when we first met, many years ago at an early re:Invent, you know the trends are always the same: you guys do a great job, a slew of announcements, you keep raising the bar. But one of the things you mentioned to me when we talked about the origination of AWS was that you were doing some stuff for Amazon proper, with a bootstrapped team, solving your own problems, getting some scar tissue; the affiliate thing, all these examples. The trend is that you guys tend to build things for yourselves and then refactor them into potential opportunities for your customers, working backwards, all that good stuff; we'll get into that in the next section. But this year, more than ever, with the pandemic: Connect, you've got Chime, you've got WorkSpaces; there's an acceleration of you guys being pretty nimble about exposing these services. I mean, Connect was a call center, an internal thing you guys had been using, and you refactored it for customer consumption. You see that kind of thing with Chime.
But you're not competing with Zoom; you're offering a service to bundle in. Is this more relevant now, as you guys get bigger with more of these services? Because you're still big now, and you're still serving yourselves. That seems to be a big trend coming out of the pandemic. Can you comment on that? >> Yeah, it's a good question, John, and we do a bunch of both, frankly. There are some services where our customers are trying to solve certain problems, and they tell us about those problems, and then we build new services for them. A good example of that was Redshift, our data warehousing service. When we went to spend time with two or three very large customers of ours and asked what we could do to help them further, they just said, I wish I had a data warehousing service for the cloud that was built in the AWS style; they were really fed up with what they were using. The same thing was true with relational databases, where people were just fed up with the old-guard commercial-grade databases of Oracle and SQL Server. They hated the pricing and the proprietary nature of them and the punitive licensing, and they wanted to move to open engines like MySQL and Postgres, but getting the same performance as the commercial-grade databases is hard. So we solved that problem for them with Aurora, which is the fastest-growing service in our history, and continues to be. So sometimes customers articulate a need, and we don't have a service that we've been running internally, but we listen, and we have a very strong and innovative group of builders here, so we build it for customers. And then there are other cases, and Connect is a great example of this, where some of our customers, like Intuit.
And Capital One, said: you know, we need something for our contact center and customer service, and people weren't very happy with what they were using in that space. And they said, you've had to build something just to manage your retail business over the last 15, 20 years; can't you find a way to generalize that and expose it? And when you have enough customers telling you there's something they want to use that you have experience building, you start to think about it. And it's never as simple as just taking that technology and exposing it, because it's often built internally, and you do a number of things to optimize it internally. But we have a way of building services at Amazon, this working-backwards process you're referring to, where we build everything with a press release and a frequently-asked-questions document, and we imagine that we're building it to be externalized, even if it's an internal feature for our retail business that's only going to be used as part of some other service, something you'd never imagine externalizing to third-party developers. We always try to build it that way, and we always try to have well-documented, hardened APIs so that other teams can use it without having to coordinate with those teams. And so it makes it easier for us to think about externalizing it, because we're a good part of the way there. And with Connect, that's what we did: we generalized it, we built it from the ground up on top of the cloud, and then we embedded a bunch of AI in it, so that people could do a number of things that would have taken them months to do with big development teams, things they can now really just point, click, and do. So we really try to do both.
>> I think that's a great example of some of the scale benefits, and it's worth calling out, because that was a consistent theme this past year: the people we've reported on and interviewed said that Connect really was a lifeline for many during the pandemic. >> We have 5,000 different customers who started using Connect during the pandemic alone, where, overnight, they had to basically deal with running a call center remotely. So they picked up Connect, they spun up a call center remotely, and they did it really quickly. And Connect, along with WorkSpaces, which are virtual desktops in the cloud, and things like Chime, and some of our partners like Zoom, have really been lifelines for people to have business continuity during a pandemic. >> I think there's going to be a whole set of new services that are going to emerge. You talked about it in your keynote, and we talked about it prior to the event: if this pandemic had hit, say, five years ago, when there weren't the advancements in videoconferencing, it would be a whole different world. And I think the whole world can see, on full display, that having integrated video communications and other cool things is going to have a productivity benefit, and that's kind of the point. >> Could you imagine what the world would have been like the last nine months if we didn't have competent videoconferencing? I mean, just think about how different it would have been. And I think that all of these capabilities today are kind of the 1.0 capabilities, where, by the way, thank God for them; we've all been able to be productive because of them. But they're so early-stage; they're all going to evolve so significantly. Even just today, I was spending some time with our team thinking about when we start to come back to the office in bigger numbers.
And when we do meetings with our remote partners, how do we think about where the center of gravity should be? Who should be on videoconferencing, and should they be allowed to videoconference from conference rooms, where it's really hard to see them, or only on their laptops, which are easier? What technology do you want in the conference rooms on both sides of the table, and how do you make it so that people who are remote can see each side of the table? I mean, all this stuff is yet to be invented. It will be very primitive for the next couple of years. Even just interrupting one another in videoconferencing: when you do it, the sound cancels out, so people don't really cut each other off and riff on one another the same way. All that technology is going to evolve over time. It's a tremendous opportunity. >> I can just see people fighting for the mute button; that's power in these meetings. You know Chuck on our team; all kidding aside, he was excited. We talked about Enron Kelly on your team, who runs product marketing for your app side as well as compute, networking, and storage. We're going to do a green-room app for theCUBE, because we're doing so many remote videos; we just did 112 here for re:Invent. One of the things people like is this idea of being ready, kind of prepped. So again, this is a use case we never would have thought of if there wasn't a pandemic. And I think these are the kinds of innovations: thinking that seems small but works well, when you consider how easy it could be to, say, integrate Chime through an SDK. So with that, I want to get into your leadership principles, because if you're a startup or a big company trying to reinvent, you're looking at the eight leadership principles you laid out, which were: don't be afraid to reinvent.
Acknowledge you can't fight gravity; talent is hungry to reinvent; solve real customer problems; speed; don't complexify; use the platform with the broader set of tools, which is more of a plug for you guys on cloud; and pull everything together with top-down goals. Okay, great. How do you take those leadership principles and apply them broadly to companies and startups? Because I think startups in the garage are going to say, I'm going to jump on this wave, I'm inspired by the sea change, I'm going to build something new; and enterprises will say, I'm going to innovate. How do you see these eight principles translating? >> Well, I think they're applicable to every company of every size in every industry, and frankly also to public sector organizations. These really were keys to how to build a reinvention culture. And I think in many ways startups have an advantage, because just by their very nature they are inventive. You can't start a company that's a direct copy of somebody else, that isn't inventive; you'd have no chance. So startups already have a group of people who feel insurgent, who are passionate about a certain customer experience and want to invent it, and they know they only have so much time to build something before the money runs out; they have a number of those built-in advantages.
But I think larger companies are often where you see struggles in building a reinvention and invention culture. In the last three weeks, as part of re:Invent, I've probably had about 40 different customer meetings, with probably 75 different companies represented in those, and I met with a lot of leaders of companies where I think these reinvention principles really resonated; I think they're battling with them. And I think it starts with the leaders. When you have big companies that have been doing things a certain way for a long period of time, there's a fair bit of inertia that sets in, and a lot of times it's not ill-intended; it's just a big group of people in the middle who've been doing things a certain way for a long time and aren't that keen to change, sometimes because it means ripping up something that they built and they remember how hard they worked on it, and sometimes because they don't know what it means for themselves. It takes the leadership team deciding that we are going to change. And usually that means they have to have access to what's really happening in their business, what's really happening with their products in the market, what customers really think of them, and what they need to change; and then having the courage and the energy, frankly, to pick the company up and push it to change, because you're going to have to fight a lot of inertia. So it always starts with the leaders. And in addition to having access to the truth and deciding to make the change, you've also got to set aggressive top-down goals that force the organization to move faster than it otherwise would. Sometimes leaders decide they want to change, and they say they're going to change, but they don't really set the goal, and it kind of languishes and doesn't happen.
You know, we have a principle inside Amazon where we talk about the difference between good intentions and mechanisms. A good intention is saying, we need to change and we need to reinvent who we are; everyone has the right intentions, but nothing happens. A mechanism, as opposed to a good intention, is saying, like Capital One did: we're going to reinvent our consumer digital banking platform in the next 18 months, and we're going to meet every couple of weeks to see where we are and to problem-solve. That's a mechanism; it's much harder to escape getting that done, versus somebody just saying we're going to reinvent and not checking on it. So I think it starts with the leaders, and then you've got to have the right talent. You've got to have people who are excited about inventing, as opposed to resting on what they built over a number of years; and at the same time you've got to make sure you don't hire people who are just building things they're interested in, where they think the tech is cool, as opposed to what customers want. And then you've really got to build speed into your culture. And I think in some ways this is the very biggest challenge for a lot of enterprises. I speak to so many leaders who kind of resign themselves to moving slowly, because they say, you don't understand, my company is big and the culture just moves slow, or we're regulated; there are a lot of reasons people will give you on why they have to move slow. But moving with speed is a choice. It's not something that you're preordained with or not; it is absolutely a leadership choice. And it can't happen overnight; you can't flip a switch and make it happen. But you can build a bunch of things into your culture: first, starting with people understanding that you are going to move fast; then building an opportunity for people to experiment quickly, rewarding people who experiment, and figuring out the difference between one-way doors and two-way doors, so that for things that are two-way doors, you let people move quickly and try things. You have to build that muscle, or when it really comes time to reinvent, you won't have it. >> That's a great point on the muscle; that's critical. One of the things I want to bring up: you brought it up in your keynote, and you talked to me privately about it, that you paid tribute, in a way, to Clay Christensen, whom you called out in your keynote. He was a professor at Harvard; you were impressed by him, you quoted him, and he was your professor there. You're a competitive person, and companies have strategy departments, but competitive strategy is not necessarily a department; it's a mindset, and you brought this out as an undertone in your talk, saying you've got to be competitive in the sense that you've got to survive and you've got to thrive, and you were kind of talking about rebuilding and building. Clay Christensen's famous book is "The Innovator's Dilemma," and there are his teachings around metrics and strategy and prescriptions. If he were alive today and with us, what would he be talking about? Because "stuck in the middle" strategy was not a Clay Christensen thing, but companies have to decide who they are, face the truth, some of the things you mentioned. What would we be talking with him about, if we were discussing the innovator's dilemma with respect to, say, cloud, and some of the key decisions that have to be made right now? >> Well, it sounds like you've read some of Clay Christensen's books. I had the good fortune of being able to sit in classes that he taught, and I also got a chance to meet with him a couple of times after I graduated from school, as more of a professional, of sorts.
You can call me that. And he was so thoughtful, and not just about innovation; he was thoughtful about how to get product-market fit, and he was thoughtful about what your priorities in life were and how to build families. I mean, he really was one of the most thoughtful, innovative, forward-thinking strategists I've had the opportunity to encounter and to read, and so I'm very appreciative of having had the opportunity to learn from him. I think he would probably be continuing to talk about a lot of the principles he taught, which I happen to think are evergreen. As it relates to the cloud, one of the things Clay talked about all the time, in all kinds of industries, is that disruption always happens at the low end. It always happens with products that seem like they're not sophisticated enough, that don't do enough, and people always pooh-pooh them, saying they won't do these things. And we learned this; I watched it in the beginning of AWS. When we launched S3, we had so many people try to compare it to things like EMC, and of course it was very different from EMC; it was much simpler, but it did a certain set of activities incredibly well at 1/100th of the price. That disrupted; at 1/100th of the price, you find that builders find a lot of utility for products like that. And so I think it always starts with simple needs and products that aren't fully developed, which over time continue to move their way up to address more and more of the market. And that's what we've done with all our services: S3 and EC2 and RDS and Aurora and things like that. And I think those lessons still apply.
I think if you look at containers and how they're changing what compute looks like, or if you look at event-driven, serverless compute in Lambda: Lambda is a great example of really a derivative of Clay's teaching, which is that we knew when we were building Lambda that as people became excited about that programming model, it would cannibalize EC2, our core compute service. And there are a lot of companies that won't do that. For us, we're trying to build a business that outlasts all of us, one that's successful over a long period of time, and the best way I know to do that is to listen to what customers are trying to solve and invent on their behalf, even if it means that in the short term you may cannibalize yourself. And so that's what we always think about: wherever we see an opportunity to provide a better customer experience, even if it means in the short term cannibalizing revenue, like Lambda could with EC2, or Aurora Serverless with provisioned Aurora, we're going to do it, because we take the long view, and we believe we serve customers well over a long period of time. We have a chance to do that. >> Better to cannibalize yourself than have someone else do it to you, right? That's the philosophy. All right, I know you're tight for time; you've got a hard stop. But let's talk about the vaccine, because you brought it up in the keynote; Carrier was a featured example. And look at the news headlines now: you've got the shots being administered, you're starting to see a hashtag going around, "I got my shot." So there's a real momentum, an uplifting vibe here. Amazon's involved in this, and you talked about it. Can you share the innovation there? Just give us an update on what's come out of that and the supply chain factor, the cold chain. You guys were pretty instrumental in that; share your thoughts.
>>We've been really excited and privileged to partner with companies who are trying to change what's possible for all of us. And I think it started with some of the companies producing vaccines. If you look at what we do with Moderna, they built their digital manufacturing suite and supply chain on top of AWS, and they use us for compute, storage, data warehousing, and machine learning. On top of AWS, they built their COVID-19 vaccine candidate in 42 days, when it normally takes 20 months. That is a total game changer. It's a game changer for all of us in getting the vaccine faster, but also when you think about what that means for healthcare moving forward. It's very exciting. And yeah, I love what Carrier is doing. Carrier is building a product on top of AWS called Lynx, which gives them end-to-end visibility over the transportation and temperature of everything they're delivering. So it changes what happens not only for food waste and spoilage, but also, if you think about how much of the vaccine they're going to actually transport to people, several of these vaccines need the right temperature control, so it's a big deal. And Carrier is a great example of the theme of this re:Invent that I talked about in my keynote: if you want to survive as an organization over a long period of time, you're going to have to reinvent yourself, probably multiple times over, and the key to reinventing is first building the right reinvention culture. We talked about some of those principles earlier, but you also have to be aware of the technology that's available to allow you to do that. If you look at Carrier, they have built a very strong reinvention culture, and if you look at how they're leveraging compute and storage and IoT at the edge and machine learning, they know what's available, and they're using that technology to reinvent what's possible. We're all going to benefit because of it. >>All right. Well, Andy, you guys reinvented the virtual event space: three weeks, and it went off well. Congratulations. It's been great to go along for the ride with theCUBE Virtual. Thank you for keeping the show alive over there at re:Invent, and thanks to your team for including theCUBE. We really appreciate theCUBE Virtual being involved. Thank you. >>It's my pleasure, and thanks for having me, John. I look forward to seeing you soon. >>All right, take care. We'll have a hockey game in real life when we get back. That's Andy Jassy, the CEO of AWS, here to really wrap up re:Invent for theCUBE Virtual, as well as the show. Today is the last day of the program. It will be online for the rest of the year, and then into next month there's another wave coming, of course. Check out all the coverage; it's online and all free. CUBE content is on the CUBE channel, siliconangle.com has all the top stories, thecube.net has tons of content, and on Twitter, hashtag #reinvent, you'll see all the commentary. Thanks for watching theCUBE Virtual. I'm John Furrier.
Rahul Pathak, AWS | AWS re:Invent 2020
>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >>Welcome back to theCUBE's ongoing coverage of AWS re:Invent 2020. theCUBE has gone virtual, along with most events these days, and continues to bring you digital coverage of re:Invent. With me is Rahul Pathak, who is the vice president of analytics at AWS. Rahul, it's great to see you again. Welcome, and thanks for joining the program. >>Hey Dave, great to see you too, and always a pleasure. Thanks for having me on. >>You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS announced in the early part of re:Invent, and I want to start with AWS Glue Elastic Views, a very notable announcement that allows people to essentially share data across different data stores. Maybe tell us a little bit more about Glue Elastic Views: where the name came from, and what the implications are.
And before glue elastic views, customers would have to either use E. T. L or data integration software, or they have to write custom code that could be complex to manage, and I could be are prone and tough to change. And so, with elastic views, you can now use sequel to define a view across multiple data sources pick one or many targets. And then the system will actually monitor the sources for changes and propagate them into the targets in near real time. And it manages the anti pipeline and can notify operators if if anything, changes. And so the you know the components of the name are pretty straightforward. Blues are survivalists E T Elling data integration service on blue elastic views about our about data integration their views because you could define these virtual tables using sequel and then elastic because it's several lists and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well. >>Okay, great. So my understanding is I'm gonna be able to take what's called what the parlance of materialized views, which in my laypersons terms assumes I'm gonna run a query on the database and take that subset. And then I'm gonna be ableto thio. Copy that and move it to another data store. And then you're gonna automatically keep track of the changes and keep everything up to date. Is that right? >>Yes. That's exactly right. So you can imagine. So you had a product catalog for example, that's being updated in dynamodb, and you can create a view that will move that to Amazon Elasticsearch service. You could search through a current version of your catalog, and we will monitor your dynamodb tables for any changes and make sure those air all propagated in the real time. And all of that is is taken care of for our customers as soon as they defined the view on. But they don't be just kept in sync a za long as the views in effect. 
>>Let's see, this is being really valuable for a person who's building Looks like I like to think in terms of data services or data products that are gonna help me, you know, monetize my business. Maybe, you know, maybe it's a simple as a dashboard, but maybe it's actually a product. You know, it might be some content that I want to develop, and I've got transaction systems. I've got unstructured data, may be in a no sequel database, and I wanna actually combine those build new products, and I want to do that quickly. So So take me through what I would have to do. You you sort of alluded to it with, you know, a lot of e t l and but take me through in a little bit more detail how I would do that, you know, before this innovation. And maybe you could give us a sense as to what the possibilities are with glue. Elastic views? >>Sure. So, you know, before we announced elastic views, a customer would typically have toe think about using a T l software, so they'd have to write a neat L pipeline that would extract data periodically from a range of sources. They then have to write transformation code that would do things like matchup types. Make sure you didn't have any invalid values, and then you would combine it on periodically, Write that into a target. And so once you've got that pipeline set up, you've got to monitor it. If you see an unusual spike in data volume, you might have to add more. Resource is to the pipeline to make a complete on time. And then, if anything changed in either the source of the destination that prevented that data from flowing in the way you would expect it, you'd have toe manually, figure that out and have data, quality checks and all of that in place to make sure everything kept working but with elastic views just gets much simpler. So instead of having to write custom transformation code, you right view using sequel and um, sequel is, uh, you know, widely popular with data analysts and folks that work with data, as you well know. 
And so you can define that view and sequel. The view will look across multiple sources, and then you pick your destination and then glue. Elastic views essentially monitors both the source for changes as well as the source and the destination for any any issues like, for example, did the schema changed. The shape of the data change is something briefly unavailable, and it can monitor. All of that can handle any errors, but it can recover from automatically. Or if it can't say someone dropped an important table in the source. That was part of your view. You can actually get alerted and notified to take some action to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge and then the final pieces, the elasticity of it. It will automatically deal with adding more resource is if, for example, say you had a spiky day, Um, in the markets, maybe you're building a financial services application and you needed to add more resource is to process those changes into your targets more quickly. The system would handle that for you. And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets. So we've got capabilities like our, uh, Amazon data exchange, where people can exchange and monetize data set. So it allows this and to end flow in a much more straightforward way. It was possible before >>awesome. So a lot of automation, especially if something goes wrong. So something goes wrong. You can automatically recover. And if for whatever reason, you can't what happens? You quite ask the system and and let the operator No. Hey, there's an issue. You gotta go fix it. How does that work? >>Yes, exactly. Right. So if we can recover, say, for example, you can you know that for a short period of time, you can't read the target database. The system will keep trying until it can get through. But say someone dropped a column from your source. 
That was a key part of your ultimate view and destination. You just can't proceed at that point. So the pipeline stops and then we notify using a PS or an SMS alert eso that programmatic action can be taken. So this effectively provides a really great way to enforce the integrity of data that's going between the sources and the targets. >>All right, make it kindergarten proof of it. So let's talk about another innovation. You guys announced quicksight que, uh, kind of speaking to the machine in my natural language, but but give us some more detail there. What is quicksight Q and and how doe I interact with it. What What kind of questions can I ask it >>so quick? Like you is essentially a deep, learning based semantic model of your data that allows you to ask natural language questions in your dashboard so you'll get a search bar in your quick side dashboard and quick site is our service B I service. That makes it really easy to provide rich dashboards. Whoever needs them in the organization on what Q does is it's automatically developing relationships between the entities in your data, and it's able to actually reason about the questions you ask. So unlike earlier natural language systems, where you have to pre define your models, you have to pre define all the calculations that you might ask the system to do on your behalf. Q can actually figure it out. So you can say Show me the top five categories for sales in California and it'll look in your data and figure out what that is and will prevent. It will present you with how it parse that question, and there will, in line in seconds, pop up a dashboard of what you asked and actually automatically try and take a chart or visualization for that data. That makes sense, and you could then start to refine it further and say, How does this compare to what happened in New York? And we'll be able to figure out that you're tryingto overlay those two data sets and it'll add them. 
And unlike other systems, it doesn't need to have all of those things pre defined. It's able to reason about it because it's building a model of what your data means on the flight and we pre trained it across a variety of different domains So you can ask a question about sales or HR or any of that on another great part accused that when it presents to you what it's parsed, you're actually able toe correct it if it needs it and provide feedback to the system. So, for example, if it got something slightly off you could actually select from a drop down and then it will remember your selection for the next time on it will get better as you use it. >>I saw a demo on in Swamis Keynote on December 8. That was basically you were able to ask Quick psych you the same question, but in different ways, you know, like compare California in New York or and then the data comes up or give me the top, you know, five. And then the California, New York, the same exact data. So so is that how I kind of can can check and see if the answer that I'm getting back is correct is ask different questions. I don't have to know. The schema is what you're saying. I have to have knowledge of that is the user I can. I can triangulate from different angles and then look and see if that's correct. Is that is that how you verify or there are other ways? >>Eso That's one way to verify. You could definitely ask the same question a couple of different ways and ensure you're seeing the same results. I think the third option would be toe, uh, you know, potentially click and drill and filter down into that data through the dash one on, then the you know, the other step would be at data ingestion Time. Typically, data pipelines will have some quality controls, but when you're interacting with Q, I think the ability to ask the question multiple ways and make sure that you're getting the same result is a perfectly reasonable way to validate. 
>>You know what I like about that answer that you just gave, and I wonder if I could get your opinion on this because you're you've been in this business for a while? You work with a lot of customers is if you think about our operational systems, you know things like sales or E r. P systems. We've contextualized them. In other words, the business lines have inject context into the system. I mean, they kind of own it, if you will. They own the data when I put in quotes, but they do. They feel like they're responsible for it. There's not this constant argument because it's their data. It seems to me that if you look back in the last 10 years, ah, lot of the the data architecture has been sort of generis ized. In other words, the experts. Whether it's the data engineer, the quality engineer, they don't really have the business context. But the example that you just gave it the drill down to verify that the answer is correct. It seems to me, just in listening again to Swamis Keynote the other day is that you're really trying to put data in the hands of business users who have the context on the domain knowledge. And that seems to me to be a change in mindset that we're gonna see evolve over the next decade. I wonder if you could give me your thoughts on that change in the data architecture data mindset. >>David, I think you're absolutely right. I mean, we see this across all the customers that we speak with there's there's an increasing desire to get data broadly distributed into the hands of the organization in a well governed and controlled way. But customers want to give data to the folks that know what it means and know how they can take action on it to do something for the business, whether that's finding a new opportunity or looking for efficiencies. 
And I think, you know, we're seeing that increasingly, especially given the unpredictability that we've all gone through in 2020 customers are realizing that they need to get a lot more agile, and they need to get a lot more data about their business, their customers, because you've got to find ways to adapt quickly. And you know, that's not gonna change anytime in the future. >>And I've said many times in the The Cube, you know, there are industry. The technology industry used to be all about the products, and in the last decade it was really platforms, whether it's SAS platforms or AWS cloud platforms, and it seems like innovation in the coming years, in many respects is coming is gonna come from the ecosystem and the ability toe share data we've We've had some examples today and then But you hit on. You know, one of the key challenges, of course, is security and governance. And can you automate that if you will and protect? You know the users from doing things that you know, whether it's data access of corporate edicts for governance and compliance. How are you handling that challenge? >>That's a great question, and it's something that really emphasized in my leadership session. But the you know, the notion of what customers are doing and what we're seeing is that there's, uh, the Lake House architectural concept. So you've got a day late. Purpose build stores and customers are looking for easy data movement across those. And so we have things like blue elastic views or some of the other blue features we announced. But they're also looking for unified governance, and that's why we built it ws late formation. And the idea here is that it can quickly discover and catalog customer data assets and then allows customers to define granular access policies centrally around that data. And once you have defined that, it then sets customers free to give broader access to the data because they put the guardrails in place. They put the protections in place. 
So you know you can tag columns as being private so nobody can see them on gun were announced. We announced a couple of new capabilities where you can provide row based control. So only a certain set of users can see certain rose in the data, whereas a different set of users might only be able to see, you know, a different step. And so, by creating this fine grained but unified governance model, this actually sets customers free to give broader access to the data because they know that they're policies and compliance requirements are being met on it gets them out of the way of the analyst. For someone who can actually use the data to drive some value for the business, >>right? They could really focus on driving value. And I always talk about monetization. However monetization could be, you know, a generic term, for it could be saving lives, admission of the business or the or the organization I meant to ask you about acute customers in bed. Uh, looks like you into their own APs. >>Yes, absolutely so one of quick sites key strengths is its embed ability. And on then it's also serverless, so you could embed it at a really massive scale. And so we see customers, for example, like blackboard that's embedding quick side dashboards into information. It's providing the thousands of educators to provide data on the effectiveness of online learning. For example, on you could embed Q into that capability. So it's a really cool way to give a broad set of people the ability to ask questions of data without requiring them to be fluent in things like Sequel. >>If I ask you a question, we've talked a little bit about data movement. I think last year reinvent you guys announced our A three. I think it made general availability this year. And remember Andy speaking about it, talking about you know, the importance of having big enough pipes when you're moving, you know, data around. Of course you do. Doing tearing. 
You also announced Aqua Advanced Query accelerator, which kind of reduces bringing the computer. The data, I guess, is how I would think about that reducing that movement. But then we're talking about, you know, glue, elastic views you're copying and moving data. How are you ensuring you know, maintaining that that maximum performance for your customers. I mean, I know it's an architectural question, but as an analytics professional, you have toe be comfortable that that infrastructure is there. So how does what's A. W s general philosophy in that regard? >>So there's a few ways that we think about this, and you're absolutely right. I think there's data volumes were going up, and we're seeing customers going from terabytes, two petabytes and even people heading into the exabyte range. Uh, there's really a need to deliver performance at scale. And you know, the reality of customer architectures is that customers will use purpose built systems for different best in class use cases. And, you know, if you're trying to do a one size fits all thing, you're inevitably going to end up compromising somewhere. And so the reality is, is that customers will have more data. We're gonna want to get it to more people on. They're gonna want their analytics to be fast and cost effective. And so we look at strategies to enable all of this. So, for example, glue elastic views. It's about moving data, but it's about moving data efficiently. So What we do is we allow customers to define a view that represents the subset of their data they care about, and then we only look to move changes as efficiently as possible. So you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly, with Aqua, what we've done, as you mentioned, is we've taken the compute down to the storage layer, and we're using our nitro chips to help with things like compression and encryption. And then we have F. P. 
J s in line to allow filtering an aggregation operation. So again, you're tryingto quickly and effectively get through as much data as you can so that you're only sending back what's relevant to the query that's being processed. And that again leads to more performance. If you can avoid reading a bite, you're going to speed up your queries. And that Awkward is trying to do. It's trying to push those operations down so that you're really reducing data as close to its origin as possible on focusing on what's essential. And that's what we're applying across our analytics portfolio. I would say one other piece we're focused on with performance is really about innovating across the stack. So you mentioned network performance. You know, we've got 100 gigabits per second throughout now, with the next 10 instances and then with things like Grab it on to your able to drive better price performance for customers, for general purpose workloads. So it's really innovating at all layers. >>It's amazing to watch it. I mean, you guys, it's a It's an incredible engineering challenge as you built this hyper distributed system. That's now, of course, going to the edge. I wanna come back to something you mentioned on do wanna hit on your leadership session as well. But you mentioned the one size fits all, uh, system. And I've asked Andy Jassy about this. I've had a discussion with many folks that because you're full and and of course, you mentioned the challenges you're gonna have to make tradeoffs if it's one size fits all. The flip side of that is okay. It's simple is you know, 11 of the Swiss Army knife of database, for example. But your philosophy is Amazon is you wanna have fine grained access and to the primitives in case the market changes you, you wanna be able to move quickly. So that puts more pressure on you to then simplify. You're not gonna build this big hairball abstraction layer. That's not what he gonna dio. 
I think about layers and layers of paint; I live in a very old house, and that's not your approach. So it puts greater pressure on you to constantly listen to your customers, and they're always saying, "Simplify, simplify, simplify." We certainly heard that in Swami's presentation the other day, all about minimizing complexity. So that really is your tradeoff: it puts pressure on Amazon engineering to continue to raise the bar on simplification. Is that a fair statement? >>Yeah, I think so. Any time we can do work so our customers don't have to, that's a win for both of us, because we're delivering more value and making it easier for our customers to get value from their data. We absolutely believe in using the right tool for the right job. You talked about an old house: you're not going to build or renovate a house with a Swiss Army knife. It's just the wrong tool. It might work for small projects, but you're going to need something more specialized to handle the things that matter. And that's really what we see with that set of capabilities. We want to give customers the best of both worlds: purpose-built tools, so they don't have to compromise on performance, scale, or functionality, and then make it easy to use these together, whether that's data movement or things like federated queries, where you can reach into each of them through a single query, and through a unified governance model. So it's all about stitching those together.
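The unified governance idea running through this conversation, column tags plus row-level rules deciding what each user sees, reduces conceptually to a filter like the one below. This is a sketch of the concept only, not the Lake Formation API; the policy structure and user names are invented for illustration.

```python
# Conceptual sketch of fine-grained access control: hide tagged columns and
# filter rows per user policy. Not the Lake Formation API; the policy
# structure here is invented for illustration.

ROWS = [
    {"dept": "hr",    "name": "Ana", "salary": 90},
    {"dept": "sales", "name": "Bo",  "salary": 70},
    {"dept": "sales", "name": "Cy",  "salary": 80},
]

POLICIES = {
    # analyst: may see only sales rows, never the salary column
    "analyst": {"row": lambda r: r["dept"] == "sales", "hidden": {"salary"}},
    # admin: sees everything
    "admin":   {"row": lambda r: True, "hidden": set()},
}

def query(user, rows=ROWS):
    """Apply the user's row predicate, then drop any hidden columns."""
    policy = POLICIES[user]
    return [
        {k: v for k, v in row.items() if k not in policy["hidden"]}
        for row in rows
        if policy["row"](row)
    ]

print(query("analyst"))      # two sales rows, with no salary field
print(len(query("admin")))   # 3
```

The point of centralizing the policy, as described above, is that every query path applies the same guardrails, which is what lets an organization hand out broad access safely.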
>>So we we've actually had a bunch of innovations on the analytics tax. So some of the highlights are in m r, which is our managed spark. And to do service, we've been able to achieve 1.7 x better performance and open source with our spark runtime. So we've invested heavily in performance on now. EMR is also available for customers who are running and containerized environment. So we announced you Marnie chaos on then eh an integrated development environment and studio for you Marco D M R studio. So making it easier both for people at the infrastructure layer to run em are on their eks environments and make it available within their organizations but also simplifying life for data analysts and folks working with data so they can operate in that studio and not have toe mess with the details of the clusters underneath and then a bunch of innovation in red shift. We talked about Aqua already, but then we also announced data sharing for red Shift. So this makes it easy for red shift clusters to share data with other clusters without putting any load on the central producer cluster. And this also speaks to the theme of simplifying getting data from point A to point B so you could have central producer environments publishing data, which represents the source of truth, say into other departments within the organization or departments. And they can query the data, use it. It's always up to date, but it doesn't put any load on the producers that enables these really powerful data sharing on downstream data monetization capabilities like you've mentioned. In addition, like Swami mentioned in his keynote Red Shift ML, so you can now essentially train and run models that were built in sage maker and optimized from within your red shift clusters. And then we've also automated all of the performance tuning that's possible in red ships. 
We've really invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price-performance perspective, up to 3x better than others. Customers can just set Redshift to auto, and it will handle workload management, data compression, and data distribution, making it easier to get all of that performance. And then the other big one was Lake Formation, where we announced three new capabilities. One is transactions, enabling consistent ACID transactions on data lakes, so you can do things like inserts, updates, and deletes. We announced row-based filtering for fine-grained access control within that unified governance model. And then automated storage optimization for data lakes: customers are dealing with unoptimized small files coming off streaming systems, for example, and Lake Formation can auto-compact those under the covers, and you can get a 7-8x performance boost. It's been a busy year for analytics. >>I'll say! Rahul, great job. Thanks so much for coming back on theCUBE and sharing the innovations. Great to see you again, and good luck in the coming year. >>Well, thank you very much. Great to be here, great to see you, and I hope we get to see each other in person again soon. >>I hope so. All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll be right back after this short break.
Day 1 Keynote Analysis | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners.
>> Everyone, welcome to theCUBE's live coverage of AWS re:Invent 2020 virtual. We're virtual this year; we are theCUBE Virtual. I'm your host John Furrier, joined by Dave Vellante for keynote analysis. Andy Jassy just delivered his live keynote; this is our live keynote analysis. Dave, great to see you. Andy Jassy again, you know: our eighth year covering re:Invent, their ninth year. We're virtual, we're not in person, but we're doing it.
>> Great to see you, John. Even though we're 3,000 miles apart, we've both got the COVID hairdo going. Happy birthday, my friend.
>> Thank you. Congratulations. Five years ago I was 50, and they had the cake on stage and on the floor. There's no floor this year; it's virtual. And I think one of the things that came out of Andy Jassy's keynote, and obviously, you know, I met with him earlier and he telegraphed some of these moves, one thing that surprised me: he came right out of the gate and acknowledged the social change, the cultural shift. That was interesting, but then he went in and did his normal end-to-end slew of announcements, with big themes around pivoting. And he brought kind of this business-school leadership vibe to the table early, talking about what people are experiencing, companies like ourselves and others, around the change, the cultural change around companies, and the leadership it takes for the cloud. And this was a big theme of re:Invent, literally like: hey, don't hold on to the old. And I kept thinking to myself, David, you and I are both historians of the tech industry, and it reminded me of when I was young, breaking into the business. The mainframe guys and gals were hugging onto those mainframes as long as they could, and I looked at it like, that's not gonna be around much longer. And they kept saying, no, it's gonna be around, this is the state of the art. And then: extinction.
Instantly this feels like that cloud moment, where it's the wake-up call: hey, everyone doing it the old way, you're done. This is it. But you know, this is a big theme.
>> Yes. So, I mean, how do you curate two and a half, three hours of Andy Jassy? I tried to break it down into three things, in addition to what you just mentioned about him acknowledging the social unrest and the inequalities, particularly for Black people. So I had market leadership, and there's some nuance there that, if we have time, I'd love to talk about; and the feature innovation, which was the bulk of his presentation, and I was very pleased. I wrote a piece this weekend, as you know, about Cloud 2030, and my main focus was that the last 10 years were about IT transformation, and the next 10 years are gonna be about organizational and business and industry transformation. I saw a lot of that in Jassy's keynote. So, you know, where do you wanna go? We've only got a few minutes here, John.
>> Let's break down the high-level theme before we get into the announcements. The thematic part was, it's about reinventing in 2020. The digital transformation is being forced upon us: either you're in the cloud or you're not in the cloud, and either way, you've got to get to the cloud to survive in this post-COVID era. You heard a lot about redefining compute: new chips, custom chips. They announced the deal with Intel, but then he's like, we're better and faster on our custom side. That was kind of a key thing, this idea of compute, and I think that comes into play with edge and hybrid. The other thing that was notable was Jassy's almost-announcement of redefining hybrid. There was no product announcement, but he was essentially announcing that hybrid has changed, and he was leaning forward with his definition of redefining what hybrid cloud is. And I think that, to me, was the biggest signal.
And then finally, what got my attention was the absolutely overt call-out of Microsoft and Oracle. You know, suddenly, out in the open, the database shift we've described multiple times: multiple databases in the cloud. He laid that out, said there will be no one database to rule everything. And he called out Microsoft. Look at Microsoft: some people, like Cloud Wars' Bob Evans, our good friend, claim that Microsoft has been number one in the cloud for, like, a year, and it's just not true, right? They're just not number one. He uses revenue as a benchmark, and if you look at Microsoft's revenue, the bulk of it is propped up by Windows Server and SQL Server; they have GitHub in there, that's new, and then a bunch of professional services and some IaaS and PaaS. If you look at true cloud revenue, there's not much there, Dave. They're definitely not number one. I think Jassy kind of throws a dagger in there by saying, hey, if you're paying more for licenses on Amazon versus Azure, that's old-school shenanigans, or sales tactics. And he called that out. That, to me, was pretty aggressive. And then finally, just the COVID management stuff and democratizing machine learning.
>> Let me pick up on a couple things. There actually were a number of hybrid announcements: ECS Anywhere, EKS Anywhere, so Kubernetes anywhere, containers anywhere; smaller Outposts; new Local Zones, announced in 12 new cities, including Boston, and then Jassy rattled them off and made sort of a joke to himself that he remembered all 12, because the guy uses no notes. He's just amazing; he's up there for three hours, no notes. And then new Wavelength Zones for the 5G edge. So actually a lot of hybrid announcements, basically, to your point, redefining hybrid, basically bringing the cloud to the edge, in which he kind of redefined the data center as just sort of another edge location.
>> Well, I mean, my point is that he actually said it needs to be redefined. He kind of paused there and then went into the announcements. And, you know, it's funny how you called out Microsoft. I was just saying, and I think this was really pivotal, we're gonna dig into that Babelfish open-source thing, which could be a complete competitive-strategy move against Microsoft. But in a way, Dave, Jassy and Amazon are pulling the same move Microsoft did decades ago. Remember embrace and extend, right, Bill Gates's philosophy? This is kind of what they're doing. They have embraced hybrid, they have embraced the data center, and they're extending it out. You're seeing Outposts, you're seeing 5G, you're seeing these IoT edge points. They're putting Amazon everywhere. That was my takeaway. They call it Amazon anywhere; I think it's everywhere. They want cloud operations everywhere. That's the theme that I see kind of bubbling out there: hey, we're just gonna keep doing this.
>> Well, what I like about it is, and I've said this for a long time now, that the edge is gonna be won by developers. And so they're essentially taking AWS and the data center as an API, and they're bringing that data-center API virtually everywhere, as you're saying. I wanna go back to something you said about leadership and Microsoft and the numbers, because I've done a lot of homework on this, as you know. So Jassy made the point, and he makes this point a lot, that it's not about the actual growth rate: yeah, the other guys are growing faster, but they're growing from a much smaller base. And I want to share with you a nuance, because he talked about how AWS grew incrementally $10 billion, and it only took them 12 months. I have quarterly forecasts, and I've published these on Wikibon and SiliconANGLE. If you look at the quarterly numbers, and this is an estimate, John, for Q4 I've got Amazon growing at 25% year on year, Azure growing at 46%, and Google growing at 58%. So Google and Azure have much, much higher growth rates than Amazon. But what happens when you look at the absolute numbers? From Q3 to Q4, Amazon goes from $11.6 billion to $12.4 billion. Microsoft actually stays flat, at around $6.7 to $6.8 billion. Google actually drops sequentially; I'm talking about sequentially, even though they have 58% growth. So the point Jassy is making is right on. He's growing at half the growth rate year on year, but his sequential revenues are the only ones of the Big Three that are growing. That's the law of large numbers: you grow more slowly, but you throw off more revenue. Who would you rather be?
>> I think, I mean, it's clear that Microsoft's not number one. Amazon's the number one cloud, certainly in infrastructure as a service and PaaS; those are the major themes now, so we won't go through them all. We're digging into the analyst sessions that come at two o'clock and three o'clock later, but they're innovating on those two, and they won them. Remember, Jassy says, oh, we're in the early innings: inning one was IaaS and PaaS, and Amazon wins it all, they ran the table, no doubt. Now inning two in the game is global IT. That was a really big part of the announcement, and people might have missed it. If you were blown away by all the technical complexity of gp3 volumes for EBS, and Aurora Serverless v2, or SageMaker Feature Store and Data Wrangler, all that complex stuff, the one takeaway is: they're going to continue to innovate in IaaS and PaaS, and the new mountain they're gonna climb is global IT spend, the stuff that's on premises. Cloud is eating the world, and AWS is hungry for on-premises and the edge. You're going to see a massive surge into those territories; that's where the big spend is gonna be.
And that's why you're seeing a big focus on containers and Kubernetes, and this kind of connective tissue between the data and machine-learning layer, the modern app layer, and full custom IaaS on the bottom of the stack. So they're kind of just marching along to the cadence of the Andy Jassy view here, Dave: they're gonna listen to customers and keep pulling it in on-prem as well, and pushing it out to the edge. And we've said it on theCUBE for many years: the data center is just a big edge. And that's what Jassy is basically saying here in the keynote.
>> Well, when Andy Jassy gets pushed on, well, yes, you listen to customers, but what about your partners, he'll give examples of partners that are doing very well, and of course there are many. But as we've often said on theCUBE, John, if you're a partner in the ecosystem, you've gotta move fast. There were three interesting feature announcements that I thought were very closely related to other things we've seen before. The high-performance elastic block storage, I forget the exact name of it, but it's a SAN in the cloud, the first-ever SAN in the cloud; it reminds me of something that Pure Storage did last year at Accelerate, so very, very similar. And then AWS Glue Elastic Views: it was sort of like Snowflake's data cloud. Now, of course, AWS has many, many more databases that they're connecting, but the way AWS does it is they're copying and moving data and doing change data management, whereas what Snowflake has is what I would consider a true global mesh. And then the third one was QuickSight Q; that reminded me of what ThoughtSpot is doing with search and analytics and AI. So again, if you're an ecosystem partner, you've gotta move fast, and you've got to keep innovating. Amazon's gonna do what it has to for customers.
>> I think Amazon's gonna have their playbooks when it's all said and done. You know, do they eat the competition up?
I think what they do is they have a game plan: let the partners innovate. They clearly need that ecosystem message; that's a key thing. I love the message from them, I think it's a positive story, but as you know, this is Amazon's Kool-Aid injection moment, a.k.a. their view of the world. My question for you is, what's your take on what wasn't said? If you were in the virtual audience, what should have been talked about? What's the reality, what's different, what didn't they hit home, what could they have done? What's your critical analysis?
>> Well, I mean, I'm not sure it should have been said, but certainly what wasn't said is a recognition that multi-cloud is an opportunity. And I think Amazon's philosophy, or belief, at the current time, is that people aren't spreading workloads, the same workload, across multiple clouds and splitting them up. What they're doing is hedging bets: maybe they're going 70/30, 90/10, 60/40. So multi-cloud, from Amazon's standpoint, is clearly not the opportunity that everybody who doesn't have a cloud, and also Google, who's a distant third in cloud, says is a huge opportunity. So it doesn't appear that it's there yet. I wouldn't call it a miss, but it's a takeaway that Amazon does not currently see multi-cloud as something that customers are clamoring for.
>> There are so many threads in here to unpack. I mean, Andy does leave a lot of, you know, signature stories and lines in there, tons of storylines. I thought one thing, and Amazon's not gonna talk about this because they don't promote products that way, but trend-wise, one thing I would have loved to see more conversation around is what I call the Snowflake factor. Snowflake built their business on Amazon, and I think you're gonna see a tsunami of new cloud service providers
come on the scene, building on top of AWS in a major way, at that kind of value. I mean, Snowflake went public at a level no one's ever seen in the history of the NYSE, and they're on Amazon. So I call that the next-tier cloud-scale value. That was one thing I'd like to see. I didn't hear much about the global IT number, the penetration; I'd love to hear more about that. And the thing that I would like to have heard more about, though Jassy kind of touched a little bit on it, was, he said at one point, when he talked about the verticals, this horizontal disruption. Now, you and I both know, we've been saying it on theCUBE for years: it's horizontally scalable, vertically specialized with the data, and that's kind of what Amazon's been doing for the past couple of years. And it's on full display here: horizontal integration value with the data, and then use machine learning with the modern applications, and you get the best of both worlds. He actually called that out in this keynote. So to me, that is a message to all entrepreneurs, all innovators out there, that if you wanna change the position of your company in the industry, do those things. There's an opportunity right now to integrate with the cloud, to disrupt horizontally, and then on the vertical. So that will be very interesting to see how that plays out.
>> And you mentioned Snowflake, and I was talking about multi-cloud. Snowflake talks about multi-cloud a lot, but I don't even think what they're doing is multi-cloud. I think what they're doing is building a data cloud across clouds, and they're abstracting that infrastructure. So to me, that's not multi-cloud as in, hey, I run on Google, or I run on AWS, or I run on Azure. It's: I'm abstracting that, making that complexity disappear, and I'm creating an entirely new cloud at scale. Quite different.
>> Okay, we've gotta break it there. Come back into our program; it's our live portion of theCUBE Live. And EKS Anywhere? That's multi-cloud.
If they won't say it, I'll say it for them. Anyway, we've got more live coverage from here at re:Invent virtual. We are the virtual Cube, John Furrier and Dave Vellante. We'll be right back.
The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL
Hello everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "The Shortest Path to Vertica: Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco and Mauricio Felicia, Vertica product engineers joining us from the EMEA region. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait: just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double-arrow button in the lower-right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Marco.
>> Hello everybody, this is Marco speaking, a sales engineer from EMEA, so I'll just get going. This is the agenda: part one will be done by me, part two will be done by Mauricio. The agenda is, as you can see: big bang or piece by piece; the migration of the DDL; migration of the physical data model; migration of ETL, ELT and BI functionality; what to do with stored procedures; what to do with any possible existing user-defined functions; and the migration of the data, which will be covered by Mauricio. Do you want to introduce yourself, Mauricio?
>> Yeah, hello everybody, my name is Mauricio Felicia and I'm a Vertica pre-sales engineer. Like Marco said, I'm going to talk about how to optimize data warehouses using some specific Vertica techniques like table flattening and live aggregate projections. So let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse as it is, with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but in the initial phase we should minimize the amount of changes, in order to go live as soon as possible. This is something that we also suggest for the second phase, where we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such. During this optimization phase we can create, for example, projections for some specific query, or optimize encoding, or change some of the resource pools; this is something that we normally do if and when needed. And finally, again if and when needed, we go through the architectural redesign, using full Vertica techniques in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we may go back to some of the specific features before moving back to the architecture and design. We are going through this process in the next few slides.
>> OK. In order to encourage everyone to keep using their common sense when migrating to a new database management system, because people are often afraid of it, it's useful to use the analogy of a house move. In your old home you might have developed solutions for your everyday life that make perfect sense there. For example, if your old Saint Bernard dog can't walk anymore, you might be using a forklift to heave him in through your window in the old home. Well, in the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. It's very much the same with a database migration: start by making the transition gentle.
To remain in my analogy with the house move: picture your new house as your new holiday home. Begin to install everything you miss and everything you like from your old home, and once you have everything you need in your new house, you can shut down the old one. So move bit by bit, and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, where you're really on a sinking ship. Otherwise, again, identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, and use the gained reputation for further funding. And if you find that nobody's using the old platform anymore, you can shut it down. If you really have to, you can still go big bang in one go, but only if you absolutely have to; otherwise migrate by subject area, and group all similar or related areas together. Right, having said that, you start off by migrating objects in the database; that's one of the very first steps. It consists of migrating first the places where you can put the other objects into, that is, owners and locations, which are usually schemas. Then, what do you have in there? You extract tables and views, then you convert the object definitions and deploy them to Vertica. And note that you shouldn't do it manually: never type what you can generate, automate whatever you can. For roles, usually there are system tables in the old database that contain all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard. Catalogs and schemas should be relatively straightforward, with maybe sometimes a difference: Vertica does not restrict you by defining a schema as a collection of all objects owned by a user, but it supports it, it emulates it, for old times' sake. Vertica does not need the catalog; if you absolutely need the catalog for the old tools that you use, it is always set to the name of the database in the case of Vertica. Having now the schemas, the catalogs, the users and the roles in place, move on to the data definition language, the DDL, of the tables. If you are allowed to, it's best to use a tool that translates the data types in the generated DDL. You might have seen the tool mentioned several times in this presentation; we are very happy to have it. It can export the old database table definitions because it works over ODBC: it reads what the old database's ODBC driver translates to ODBC, and then it has internal translation tables to several target DBMS flavors, the most important of which is obviously Vertica. If they force you to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, etc.; each DBMS should have a set of tools to extract the object definitions to be deployed in another instance of the same DBMS. If I talk about views: you'll usually find the view definitions also in the old database catalog. One thing that might need a bit of special care is synonyms; synonyms get emulated in different ways depending on the specific needs. You might set a synonym on the view or table to be referred to, but something that is really neat, which other databases don't have, is the search path. That works very much like the PATH environment variable in Windows or Linux: you specify a table or object name without the schema name, and then it's searched for first in the first entry of the search path, then in the second, then in the third, which makes synonyms completely unneeded. When you generate DDL, to remain in the analogy of moving house: dust and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, this is usually the corpse of a bad migration in the past. An ID is usually an integer and not a floating-point data type; a first name hardly ever has 256 characters; and if it's called HIRE_DT, it's not necessarily needed to store the second when somebody was hired. So take good care, while you are moving, to dust off your stuff, and use better data types. The same applies especially to strings: how many bytes does a string contain? For Euro signs it's not one; the Euro sign actually takes three bytes in UTF-8, which is the way that Vertica encodes strings, while an ASCII character takes one byte. That means that when you have a single-byte character set as a source, you have to pay attention: oversize the columns first, because otherwise data gets rejected or truncated, and then you will have to very carefully check what the best size is. The most promising approach is to initially dimension strings in multiples of the original length; the tool option you see here would double the length of what would otherwise be a single-byte character column, and multiply it further for columns that hold wide characters in the traditional database. Then load a representative sample of your source data and profile it, using the tools that we personally use, to find the actual longest values, and then make the columns shorter again. Notice that the issues of having too-long and too-big data types come back in projection design; we live and die with our projections. You might remember the rules on how default projections come to exist. The way that we do it initially would be, just like for the profiling, to load a representative sample of the data, collect a representative set of already-known queries, and run the Vertica Database Designer. And you don't have to decide immediately; you can always amend things. Otherwise, follow the laws of physics: avoid moving data back and forth across nodes, and avoid heavy I/O where you can.
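The byte-versus-character point about UTF-8 sizing can be checked directly with Vertica's standard length functions (the expected values below follow from UTF-8 encoding itself):

```sql
SELECT OCTET_LENGTH('abc');    -- 3: ASCII characters take one byte each
SELECT OCTET_LENGTH('€');      -- 3: the Euro sign alone takes three bytes in UTF-8
SELECT CHARACTER_LENGTH('€');  -- 1: still a single character
-- so a VARCHAR(10) migrated from a single-byte charset may need to be
-- oversized (e.g. VARCHAR(30)) before profiling the real data
```

This is why the advice above is to oversize first, then profile a representative sample and shrink the columns to the actual longest values.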
If you can, design your projections by hand initially. Encoding matters: you know that the Database Designer is a very tight-fisted thing; it optimizes to use as little space as possible. You have to keep in mind that if you compress very well, you might end up spending more time reading the data back. This is a test we ran once using several encoding types, and you can see that RLE, run-length encoding, is not even visible in the timings if the data is sorted, while the others are considerably slower. You can get the slides and look at the numbers in detail.

Now, BI migrations. Usually you can expect 80% of everything to be able to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which will be talked about later; you might have to write those by hand. You will be able to switch off caching, because Vertica speeds up everything; and with live aggregate projections, if you have worked with MOLAP cubes before, you very probably won't miss them at all.

ETL tools: if you load row by row into the old database, consider changing everything to very big transactions; and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of mass inserts.

Custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database; we compare them regularly, and it is by far ahead of any other database. You might find that many of the functions you have written won't be needed in the new database, so look at the Vertica catalog instead of trying to migrate a function you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have.
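The named-pipe pattern mentioned here might look like the following sketch; the pipe path, table name, and exporter command are all hypothetical:

```sql
-- From a shell on the loading host, create the pipe and start the exporter:
--   mkfifo /tmp/sales.pipe
--   old_db_export sales > /tmp/sales.pipe &
-- Then replace millions of parameterized INSERTs with one bulk COPY,
-- which commits once, as a single big transaction:
COPY sales FROM '/tmp/sales.pipe' DELIMITER '|' NULL '' DIRECT;
```

DIRECT asks Vertica to write straight into ROS containers, which suits large batches; the COPY page of the Vertica documentation is the reference for the exact load options.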
Very rarely will you actually have to write a procedure that involves a loop; in our experience it is really very, very rare, and usually you can just switch to standard scripting. This is basically repeating what Maurizio said, so in the interest of time I will skip it. Look at this one instead: most of a data warehouse migration should be automatic. You can automate DDL migration using odb, which is crucial. Data profiling is not crucial, but game-changing. The encoding is the same kind of thing: you can automate it using our Database Designer, and physical data model optimization in general is game-changing, so use the Database Designer. Use the provisioning; use the old platform's tools to generate the SQL. "No objects without their owners" is crucial. And as for functions and procedures, they are only crucial if they embody the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now. Thank you.

>> Thank you, Marco. We will now continue our presentation by talking about some of the optimization techniques that we can implement in Vertica in order to improve the general efficiency of the data warehouse. Let me start with a few simple messages. The first one is that you are supposed to optimize only if and when it is needed: in most cases, just a little lift and shift from the old data warehouse to Vertica will provide the performance you were looking for, or even better, so in those cases it is probably not really needed to optimize anything. In case you want, or need, to optimize, then keep in mind some of the Vertica peculiarities: for example, implement deletes and updates the Vertica way; use live aggregate projections in order to avoid, or better, to limit, GROUP BY execution at query time; use flattened tables in order to avoid or limit joins; and then you can also use some specific Vertica extensions, for example time series analysis or machine learning on top of your data.
We will now start by reviewing the first of these bullets: optimize if and when needed. Well, if, when you migrate from the old data warehouse to Vertica without any optimization, the performance level is already OK, then probably your migration is done. But if this is not the case, one very easy optimization technique is to ask Vertica itself to optimize the physical data model using the Vertica Database Designer. DBD, which is the Vertica Database Designer, has several interfaces; here I am going to use what we call the DBD programmatic API, basically SQL functions. In other databases you might need to hire experts to look at your data, your table definitions, creating indexes or whatever; in Vertica, all you need is to run something as simple as six single SQL statements to get a very well optimized physical data model. You see that we start by creating a new design; then we add to the design the tables and the queries, the queries that we want to optimize; we set our target, in this case tuning the physical data model in order to maximize query performance, which is why we use the query objective in our statement (another possible objective would be to tune in order to reduce storage, or a mix between tuning storage and tuning queries); and finally we ask Vertica to produce and deploy the optimized design. In a matter of literally minutes, what you get is a fully optimized physical data model. This is something very, very easy to implement.

Next, keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations. Vertica writes rows into ROS containers on disk; a ROS container is a group of files, and we will never, ever change the content of these files. The fact that the ROS container files are never modified is one of the Vertica peculiarities.
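The six Database Designer calls described here look roughly like this. The design name and file paths are placeholders, and the exact argument lists vary by Vertica version, so treat this as a sketch and check the documentation:

```sql
SELECT DESIGNER_CREATE_DESIGN('migration_design');
SELECT DESIGNER_ADD_DESIGN_TABLES('migration_design', 'public.unit_sold');
SELECT DESIGNER_ADD_DESIGN_QUERIES('migration_design',
                                   '/home/dbadmin/sample_queries.sql', TRUE);
-- Tune for query speed; 'LOAD' or 'BALANCED' would favor storage or a mix:
SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('migration_design', 'QUERY');
SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('migration_design',
       '/home/dbadmin/design.sql', '/home/dbadmin/deploy.sql');
SELECT DESIGNER_DROP_DESIGN('migration_design');
```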
This approach allows Vertica to use minimal locks. We can run multiple load operations in parallel against the very same table, assuming we don't have a primary or unique constraint enforced on the target table, precisely because the parallel loads will end up in different ROS containers. A SELECT in READ COMMITTED requires no lock at all and can run concurrently with an INSERT...SELECT, because the SELECT works on a snapshot of the catalog taken when the transaction starts; this is what we call snapshot isolation. And recovery, because we never change the ROS files, is very simple and robust. So we have a huge number of advantages due to the fact that we never change the content of the ROS containers; but on the other side, deletes and updates require a little attention.

So, what about delete? When you delete in Vertica, you basically create a new object called a delete vector, created either in the ROS or in memory, and this vector points to the data being deleted, so that when a query is executed, Vertica simply ignores the rows listed in the delete vectors. And it's not just about delete: an update in Vertica consists of two operations, a delete and an insert, and a merge consists of either an insert or an update (which in turn is made of a delete plus an insert). So basically, if we tune how the delete works, we will also have tuned the update and the merge.

What should we do in order to optimize delete? Remember what we said: every time we delete, we actually create a new object, a delete vector. So, first, avoid committing deletes and updates too often; this reduces the work for the mergeout and removal activities that run afterwards. Second, be sure that all the interested projections contain the columns used in the delete predicate: this lets Vertica access the projection directly, without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster.
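A sketch of both delete optimizations; the table, columns, and segmentation are made up for illustration:

```sql
-- 1. Batch the deletes: one commit for many statements, not one per row,
--    so far fewer delete vectors have to be merged out later.
DELETE FROM unit_sold WHERE date_time < '2015-01-01';
DELETE FROM unit_sold WHERE pid IN (SELECT pid FROM retired_products);
COMMIT;

-- 2. Include the delete-predicate columns (pid, date_time) in every
--    projection of the table, so Vertica can build the delete vector
--    against each projection directly instead of via the super projection.
CREATE PROJECTION unit_sold_by_date AS
    SELECT pid, date_time, quantity
    FROM unit_sold
    ORDER BY date_time
    SEGMENTED BY HASH(pid) ALL NODES;
```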
And finally, another very interesting optimization technique is to segregate the update and delete operations from the main query workload in order to reduce lock contention, and this can be done using partition operations. This is exactly what I want to talk about now. Here you have a typical data warehouse architecture: we have data arriving in a landing zone, where it is loaded from the data sources; then we have a transformation layer writing into a staging area, which in turn feeds the partitioned blocks of data in the green data structures at the end. Those green data structures at the end are the ones used by the data access tools when they run their queries. Sometimes we might need to change old data, for example because we have late-arriving records, or maybe because we want to fix some errors that originated in the source systems. What we do in this case is simply copy the partition we want to change, or adjust, back from the green area at the end to the staging area; this is a very fast operation, a copy partition. Then we run our updates, or our adjustment procedure, or whatever we need in order to fix the errors in the data, in the staging area; and at the very same time, people continue to query the green data structures at the end, so we will never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap partition between the tables, in order to swap the data we just finished adjusting in the staging zone into the query area, the green one at the end. This swap partition is very fast, it is an atomic operation, and basically all that happens is that we exchange the pointers to the data. This is a very, very effective technique, and a lot of customers use it. So, why flattened tables and live aggregate projections? Basically, we use flattened tables and live aggregate projections to minimize or avoid joins and GROUP BYs.
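That copy-out, fix, swap-back cycle maps onto two Vertica partition functions. The table names and date range here are invented, and the argument signatures should be checked against the documentation:

```sql
-- Copy March 2020 out of the queried table into a staging table;
-- readers keep working on fact_sales undisturbed:
SELECT COPY_PARTITIONS_TO_TABLE('fact_sales', '2020-03-01', '2020-03-31',
                                'fact_sales_stage');

-- Fix the late or wrong rows in the staging copy:
UPDATE fact_sales_stage SET amount = 0 WHERE amount IS NULL;
COMMIT;

-- Atomically exchange the adjusted partitions back into the query table;
-- under the hood only the pointers to the ROS containers are swapped:
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('fact_sales_stage',
                                      '2020-03-01', '2020-03-31',
                                      'fact_sales');
```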
Joins are what flattened tables are used for, and GROUP BYs are what live aggregate projections are used for. Now, compared to traditional data warehouses, Vertica can store, process, aggregate and join orders of magnitude more data; it is a true columnar database, and joins and GROUP BYs are normally not a problem at all; they run faster than in any traditional data warehouse. But there are still scenarios where the data sets are so big, and we are talking about petabytes of data, and growing so quickly, that we need something in order to boost GROUP BY and join performance. This is why you can use live aggregate projections to perform aggregations at load time, limiting the need for GROUP BYs at query time, and flattened tables to combine information from different entities at load time, again avoiding running joins at query time.

OK, live aggregate projections. At this point in time we can use live aggregate projections with four built-in aggregate functions, which are SUM, MIN, MAX and COUNT. Let's see how this works. Suppose you have a normal table, in this case a table unit_sold with three columns, pid, date_time and quantity, which has been segmented in a given way. On top of this base table, which we call the anchor table, we create a projection. You see that we create the projection using a SELECT that aggregates the data: we take the pid, the date portion of date_time, and the sum of quantity from the base table, grouping on the first two columns, pid and the date portion of date_time. What happens in this case when we load data into the base table? All we have to do is load data into the base table. When we do, we will of course fill the base projections; assuming we are running with K-safety 1, we will have two projections, and we will load into those two projections all the detailed data we are loading into the table: pid, date_time and quantity.
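The unit_sold example can be sketched as follows; the exact column definitions are my own, since the slide itself isn't reproduced here:

```sql
CREATE TABLE unit_sold (
    pid       INT,
    date_time TIMESTAMP,
    quantity  INT
) SEGMENTED BY HASH(pid) ALL NODES;

-- Live aggregate projection on the anchor table: Vertica maintains the
-- per-product, per-day sums automatically as data is loaded.
CREATE PROJECTION total_qty_per_day AS
    SELECT pid,
           date_time::DATE AS sold_date,
           SUM(quantity)   AS total_quantity
    FROM unit_sold
    GROUP BY pid, date_time::DATE;
```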
But at the very same time, without having to do anything, without running any particular operation or any ETL procedure, we also automatically get, in the live aggregate projection, the data pre-aggregated by pid and the date portion of date_time, with the sum of quantity in the column named total_quantity. You see, this is something we get for free, without having to run any specific procedure, and it is very, very efficient. So the key concept is that the loading operation, from the DML point of view, is executed against the base table; we do not explicitly aggregate the data, and we don't have any ETL procedure: the aggregation is automatic, and Vertica brings the data into the live aggregate projection every time we load into the base table.

You see the two SELECTs we have on the left side of the slide: those two SELECTs produce exactly the same result. Running SELECT pid, date, SUM(quantity) against the base table, or running SELECT * against the live aggregate projection, results in exactly the same data. Now, this is of course very useful, but what is much more useful, and we can observe it if we run an EXPLAIN, is that if we run the SELECT against the base table asking for the grouped data, what happens behind the scenes is that Vertica sees that there is a live aggregate projection with the data already aggregated during the loading phase, and rewrites your query to use the live aggregate projection. This happens automatically: here you see a query that ran a GROUP BY against unit_sold, and Vertica decided to rewrite it as something executed against the live aggregate projection, because it knows this will save a huge amount of time and effort. And it is not just limited to the exact information you chose to aggregate: with another query, for example a SELECT COUNT, you might note that other GROUP BYs will also take advantage of the live aggregate projection.
And again, this is something that happens automatically; you don't have to do anything to get it. One thing that we have to keep very, very clear in mind: what Vertica stores in the live aggregate projection is partially aggregated data. In this example we have two INSERTs: you see that the first insert is inserting four rows, and the second insert is inserting five rows. For each of these inserts we will have a partial aggregation. Vertica will never know that after the first insert there will be a second one, so Vertica calculates the aggregation of the data every time you run an insert. This is a key concept, and it also means that you can maximize the effectiveness of this technique by inserting large chunks of data. If you insert data row by row, this live aggregate projection technique is not very useful, because for every row you insert you will have one aggregation, so basically the live aggregate projection will end up containing the same number of rows that you have in the base table. But if you insert a large chunk of data every time, the number of aggregations you have in the live aggregate structure is much smaller than the base data.

You can see how this works by counting the number of rows you have in the live aggregate projection. If you run the SELECT COUNT(*) against the unit_sold live aggregate projection, the query on the left side, you will get four rows; but if you EXPLAIN this query, you will see that it was reading six rows. This is because each of the two inserts we ran effectively inserted three rows into the live aggregate projection. So this is the key concept: live aggregate projections keep partially aggregated data, and the final aggregation always happens at run time. OK. Another structure, which is very similar to the live aggregate projection, is what we call a Top-K projection.
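The load-in-big-chunks advice can be sketched like this; the file paths are hypothetical:

```sql
-- Two bulk loads into the anchor table; each one adds its own partially
-- aggregated groups to the live aggregate projection:
COPY unit_sold FROM '/data/batch1.csv' DELIMITER ',' DIRECT;
COPY unit_sold FROM '/data/batch2.csv' DELIMITER ',' DIRECT;

-- The final aggregation still happens at query time; the optimizer rewrites
-- this GROUP BY to read the (much smaller) live aggregate projection,
-- which the plan printed by EXPLAIN will show:
EXPLAIN
SELECT pid, date_time::DATE, SUM(quantity)
FROM unit_sold
GROUP BY pid, date_time::DATE;
```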
In a Top-K projection we actually do not aggregate anything; we just keep the last rows, or limit the number of rows we collect, using the LIMIT ... OVER (PARTITION BY ... ORDER BY ...) clause. Again, in this case we create, on top of the base table, two Top-K projections: one to keep the last quantity that has been sold, and the other one to keep the max quantity. In both cases it is just a matter of ordering the data, in the first case using the date_time column, in the second case using quantity; and in both cases we fill the projection with just the last row. And again, this is something we get when we insert data into the base table, and it happens automatically. If, after the insert, we run our SELECT against either the max quantity or the last quantity, we get just the very last values; you see that we have many fewer rows in the Top-K projections.

We said at the beginning that we can use four built-in functions; you might remember them: MIN, MAX, SUM and COUNT. What if I want to create my own specific aggregation on top of the data? This comes up because our customers have very specific needs in terms of live aggregate projections. Well, in this case you can code your own live aggregate projections with user-defined functions: you can create a user-defined transform function (UDTF) to implement any sort of complex aggregation while loading data. After you have implemented this UDTF, you can deploy it using the pre-pass approach, which basically means the data is aggregated at load time, during the data ingestion, or the batch approach, which means the aggregation runs on top of the loaded data afterwards. Things to remember about live aggregate projections: they are limited to the built-in functions, again SUM, MAX, MIN and COUNT, but you can code your own UDTF, so you can do whatever you want; they can reference only one table; and for Vertica versions before 9.3 it was impossible to update or delete on the anchor table, but this limit has been removed in 9.3.
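The two Top-K projections described here might be written like this, using the unit_sold table from the example above:

```sql
-- Keep the most recent sale per product (K = 1, ordered by time):
CREATE PROJECTION last_sale_per_product AS
    SELECT pid, date_time, quantity
    FROM unit_sold
    LIMIT 1 OVER (PARTITION BY pid ORDER BY date_time DESC);

-- Keep the biggest sale per product (K = 1, ordered by quantity):
CREATE PROJECTION max_sale_per_product AS
    SELECT pid, date_time, quantity
    FROM unit_sold
    LIMIT 1 OVER (PARTITION BY pid ORDER BY quantity DESC);
```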
So you can now update and delete data from the anchor table. Live aggregate projections follow the segmentation of the GROUP BY expression, and in some cases the optimizer can decide to pick the live aggregate projection or not, depending on whether using the aggregation is convenient. Remember that if we insert and commit every single row to the anchor table, then we end up with a live aggregate projection that contains exactly the same number of rows as the base table; in that case, using the live aggregate projection or the base table would be the same.

OK, so this is one of the two fantastic techniques we can implement in Vertica: the live aggregate projection, used basically to avoid or limit GROUP BYs. The other one, which we are going to talk about now, is the flattened table, used to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to get this problem fixed regardless of the amount of data we are dealing with.

So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is, like the one in this slide. The main purpose of a normalized schema is to reduce data redundancy, and the fact that we reduce redundancy is a good thing, because we obtain fast writes: we only have to write small chunks of data into the right tables. The problem with normalized schemas is that when you run your queries, you have to put together the information that arrives from the different tables, and you are required to run joins. Again, Vertica normally is very good at running joins, but sometimes the amount of data makes joins not easy to deal with, and joins are sometimes not easy to tune. What happens in the normal, let's say traditional, data warehouse is that we denormalize the schemas, normally either manually or using an ETL.
So basically, on one side of this slide we have the normalized schemas, where we get very fast writes; on the other side we have the wide table, where we have run all the joins and pre-aggregations in order to prepare the data for the queries. So we have fast writes on one side and fast reads on the other. The problem lies in the middle, because we push all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. The way we normally implement this, either manually using procedures or using an ETL tool, is what happens in traditional data warehouses: we have to code an ETL layer that runs the INSERT...SELECT that feeds from the normalized schema and writes into the wide table at the end, the one used by the data access tools where we run our queries. This approach is costly, because of course someone has to code the ETL; it is slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything was OK with the batch; it is resource-intensive, and it is also people-intensive, because of the people who have to code and check the results; it is error-prone, because it can fail; and it introduces latency, because there is a gap on the time axis between time t0, when you load the data into the normalized schema, and time t1, when you finally get the data ready to be queried.

What Vertica offers to facilitate this process is the flattened table. With the flattened table, first, you avoid data redundancy, because you don't need the wide table alongside the normalized schema on the left side. Second, it is fully automatic: you don't have to do anything, you just insert the data into the wide table, and the ETL that you would have coded is transformed into an INSERT...SELECT by Vertica, automatically.
You don't have to do anything; it is robust, and the latency is zero: as soon as you load the data into the wide table, you get all the joins executed for you. So let's have a look at how it works. In this case we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that there is one column here, dim_value, which can be defined either with DEFAULT followed by a SELECT, or with SET USING. The difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load the data into the base table; if we use SET USING, we will have to run a refresh. But everything is there: you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it comes for free, and it is latency zero; as soon as you load the other columns, you will have the dimension value as well.

Let's see an example. Suppose we have a dimension table, customer_dimension, on the left side, and a fact table on the right. You see that the fact table uses columns like o_name or o_city, which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table, directly into the fact table, without of course loading the data that arrives from the dimension, all the data from the dimension is populated automatically. So, suppose we are running this INSERT: as you can see, we are inserting directly into the fact table, and we are loading o_id, customer_id and total. We are not loading name or city; those will be automatically populated by Vertica for you, because of the definition of the flattened table. That is all you need in order to have your wide table, your flattened table, built for you.
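A sketch of the flattened fact table described here, with both variants; the table and column names are approximations of the ones on the slide:

```sql
CREATE TABLE customer_dimension (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(80),
    city        VARCHAR(80)
);

CREATE TABLE fact_orders (
    o_id        INT,
    customer_id INT,
    total       NUMERIC(12, 2),
    -- DEFAULT: populated immediately, as each row is loaded:
    o_name VARCHAR(80) DEFAULT (SELECT name FROM customer_dimension cd
                                WHERE cd.customer_id = fact_orders.customer_id),
    -- SET USING: populated only when the column is refreshed:
    o_city VARCHAR(80) SET USING (SELECT city FROM customer_dimension cd
                                  WHERE cd.customer_id = fact_orders.customer_id)
);

-- Only the fact columns are loaded; o_name is filled in by the DEFAULT:
INSERT INTO fact_orders (o_id, customer_id, total) VALUES (1, 42, 99.90);

-- SET USING columns are brought up to date on demand:
SELECT REFRESH_COLUMNS('fact_orders', 'o_city', 'REBUILD');
```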
This means that at run time you won't need any join between the base fact table and the customer dimension that we used in order to calculate name and city, because the data is already there. This was using DEFAULT; the other option is SET USING. The concept is absolutely the same: you see that in this case, on the right side, we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The concept, as I said, is the same, but in this case, with SET USING, we have to refresh. You see that we have to run this SELECT REFRESH_COLUMNS with the name of the table; in this case all the columns will be refreshed, or you can specify only certain columns, and this will bring in the values for name and city, reading from the customer dimension. This technique is extremely useful.

The difference between DEFAULT and SET USING, just to summarize the most important point: you just have to remember that DEFAULT populates your target when you load, and SET USING when you refresh. And in some cases you might need to use them both: in this example here, you see that we define o_name using both DEFAULT and SET USING, and this means that we have the data populated either when we load the data into the base table or when we run the refresh.

So this is the summary of the techniques that we can implement in Vertica in order to make our data warehouses even more efficient. And, well, this is basically the end of our presentation. Thank you for listening; we are now ready for the Q&A session.
Abhiman Matlapudi & Rajeev Krishnan, Deloitte | Informatica World 2019
>> Live from Las Vegas, it's theCUBE, covering Informatica World 2019. Brought to you by Informatica. >> Welcome back everyone to theCUBE's live coverage of Informatica World. I am your host, Rebecca Knight, along with my co-host, John Furrier. We have two guests for this segment. We have Abhiman Matlapudi. He is a Product Master at Deloitte. Welcome. >> Thanks for having us. >> And we have Rajeev Krishnan, Specialist Leader at Deloitte. Thank you both so much for coming on theCUBE. >> Thanks Rebecca, John. It's always good to be back on theCUBE. >> Love the new logos here, what's with the pins? What's the new take on those? >> It looks like a honeycomb! >> Yeah, interesting that you ask: this is our joint Deloitte-Informatica lapel pin. You can see the Deloitte green colors, >> Nice! They're beautiful. >> And the Informatica colors. It shows the great collaboration that we've had over, you know, the past few years, and our plans for the future as well. >> Well, that's what we're here to talk about. So why don't you start the conversation by telling us a little bit about the history of the collaboration, and what you're planning for the future. >> Yeah. So, you know, if we go ten years back, the collaboration between Deloitte and Informatica was not always that strong, specifically because Deloitte is a huge place to navigate, you know, in order to have those meaningful collaborations. But over the past few years we've built solid relationships with Informatica, and vice versa. I think we see great value: they're the clear leaders in the data management space. It's easy for us to advise clients on the different facets of data management, you know, because no other company actually pulls together the whole ecosystem this well. >> Well, you're being polite. In reality, you know where it's weak and where it's real.
I mean, the reality is there's a lot of FUD out there, a lot of noise, and so I've got to ask you, because this is the real question: no one environment is the same. Customers want to get to the truth faster. Like, where's the deal? What's the real deal with data? What's gettable? What's attainable? What's aspirational? Because you could say, "Hey, I'll make a data-driven organization, SaaS apps everywhere." >> Yeah, absolutely. I mean, every company wants to be more agile. Business agility is what's driving companies to move all of their business apps to the Cloud. The problem with that is that people don't realize you also need to have your data management and governance house in order. According to a recent Gartner study, by next year, among the 75% of companies that have moved their business apps to the Cloud, unless they have their data management and data assets under control, with some kind of information governance that has, you know, context, or purview, over all of these business apps, 50% of their data assets are going to erode in value. So, absolutely the need of the hour. We've seen that great demand from our clients as well, and that's what we've been advising them on. >> What's a modern MDM approach? Because this is really the heart of the conversation; we're here at Informatica World. What does it look like? What is it? >> So, I mean, there are different facets, or functionalities, within MDM that make up a holistic modern MDM, right. In the past, we've seen companies doing MDM to get to that 360-degree view. Somewhere along the line, the ball gets dropped: that 360 view doesn't get combined with your data warehouse and all of the transaction information, and, you know, your business users don't get the value they were looking for when they invested in that MDM platform.
So in today's world, MDM needs to provide front office users with the agility they need. It's not about someone in the back office doing some data stewardship; it's all about empowering the front office users as well. And there's an aspect of AI/ML from a data stewardship perspective. I mean, everyone wants cost takeout, right; there are fewer resources and more data coming in, so how do you manage all of that data? You absolutely need AI/ML. Informatica's CLAIRE product helps with suggestions and recommendations for matching algorithms, and Deloitte has our own MDM Elevate solution that embeds AI/ML for data stewardship: it learns from human data inputs and, you know, cuts through the mass of data records that have to be managed. >> You know Rajeev, it was interesting, last year we were talking, the big conversation was that moving data around is really hard. Now there are solutions for that: move the data, with integrity, on premise and in the Cloud. Give us an update on what's going on there, because there seems to be a lot of movement, positive movement, around that, in terms of, you know, quality, end to end. We heard Google up here earlier saying, "Look, we can do end to end all you want." This has been a big thing. How are you guys handling this? >> Yeah, absolutely. In today's keynote you heard Anil Chakravarthy and Thomas Kurian up on the stage, and Anil announced MDM on GCP; that's an offering that Deloitte is hosting and managing. It's going to be an absolutely white-glove service that gives you everything from advise to implement to operate, all hosted on GCP. So it's a three-way ecosystem offering between Deloitte, Informatica, and GCP. >> Well, just something about GCP, as a side note before you get there: they are really clever. They're using SQL as a way to abstract all the under-the-hood configuration stuff. Smart move, because there's a ton of SQL people out there!
>> I mean, it's not just structured query language for structured data; it's the lingua franca for data. They've been changing the game on that. >> Exactly, and it should be part of their Cloud journey. When organizations start thinking about Cloud, first of all, what they need to do is understand where all their data assets are: where the data feeds are coming in, where the data lakes are. And once they understand where their data is, it's not always wise, or necessary, to move all of it to the Cloud. So Deloitte's recommendation is a hybrid approach: keep some of the legacy data assets on premises and some in the Cloud applications. So Informatica MDM on GCP, powered by Deloitte, acts as an MDM nimble hub. Irrespective of where your data assets are, it can give you quick access to the data, it can enrich the data, it can master the data, and it can protect your data. And it's all done by Informatica. >> Describe what a nimble hub is, real quick. What does "nimble hub" mean? >> It means that, irrespective of wherever your data is coming in and going out, it gives you a very light feeling, so the client wouldn't know. With Informatica MDM on GCP powered by Deloitte, we are asking clients to just give us the data. And everything, as Rajeev said, is a white-glove approach: from engagement to operation, they will just feel seamless support from Deloitte. >> Yeah, and just to address the nimbleness factor: we see clients that suddenly need to get into a new market, or want to, say, introduce a new product, so they need nimbleness from a business perspective. Which means you suddenly have to scale your data workloads up and down as well, right? And that's not just transactional data, but master data as well.
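The hybrid placement reasoning discussed above (regulatory residency, legacy coupling, confidential IP, elastic compute needs) can be sketched as a simple rule table. This is an illustrative sketch, not Deloitte's actual methodology; the attribute names and rule order are assumptions.

```python
def recommend_placement(workload: dict) -> str:
    """Return 'on-premises' or 'cloud' for a workload, using rules of the
    kind discussed in the conversation. All attributes are illustrative."""
    # Regulatory residency (e.g., GDPR) keeps the data in a controlled location.
    if workload.get("data_residency_restricted"):
        return "on-premises"
    # Legacy systems tightly coupled to other on-prem applications stay put.
    if workload.get("tightly_coupled_legacy"):
        return "on-premises"
    # Confidential IP (pricing, product strategy) the business won't share.
    if workload.get("confidential_ip"):
        return "on-premises"
    # Workloads that need elastic compute and storage are cloud candidates.
    return "cloud"

placement = recommend_placement({"data_residency_restricted": True})
```

In practice each rule would be weighed against cost and latency rather than short-circuiting, but the decision inputs are the ones named in the discussion.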
And that's where the Cloud approach gives them a positive advantage. >> I want to get back to something Abhiman said about how it's not always wise or necessary to move to the Cloud. This is a debate about where you keep stuff. Should it be on premises? You said that Deloitte recommends a hybrid approach, and I'm sure that's a data-driven recommendation. I'm wondering what evidence you have, and why that recommendation? >> So, it depends on the applications you're putting on MDM, and the sources and data you're trying to get for the Informatica MDM to work. Some of your source systems are already tied up with so many other applications within your on-premises environment, and they don't want to give up all that data. And some might have concerns about sending their data to the Cloud. So that's when you want to keep those old-world legacy systems, the ones that don't get upgrades, on premises, while the ones that are cloud-savvy and starting fresh, and that need a lot of compute power and storage, those are the systems we recommend for the Cloud. That's why we say: think about where you want to move your databases. >> And some of it is also driven by regulation, right? Like GDPR, and which providers offer service in what countries. And there are also companies that say, "Well, my product strategy and my pricing around products, I don't want to give that away to someone," especially in the high-tech field. Your provider is going to be a confidant. >> Rajeev, one of the things I'm seeing here at this show is clearly that the importance of the Cloud should not be understated. You mentioned you've got the servers at Google. This is changing not just the customer's opportunity, but your ability to service them. You've got a white-glove service; I'm sure there's a ton more headroom. Where do you guys see the Cloud going next?
Obviously it's not going away, and on premises isn't going away. But certainly, the importance of the Cloud should not be understated. That's what I'm hearing clearly. You see Amazon, Azure, Google, all big names, with Informatica. But with respect to you guys, as you go out and do your services, this is good for business, for you guys, helping customers. >> Yeah, absolutely. I think there's value for us, and there's value for our clients. You know, it's not just the apps that are going to the Cloud, right? You see data platforms going to the Cloud too. For example, Cloudera just launched CDP, going GA by July-August. Snowflake is on the Cloud doing great, getting good traction in the market. So eventually, what we're seeing is, whether it's business applications or data platforms, they're all moving to the Cloud. Now, the key thing to look out for in the future is how we help our clients navigate a multi-cloud environment, because sooner or later, they won't want to have all of their eggs in one basket, right? So how do we help navigate that? How do we make it seamless to the business user? Those are the challenges that we're thinking about. >> What's interesting about Databricks and Snowflake, you mentioned them, is that it really is a telltale sign that start-ups can break through and crack the enterprise with Cloud and the ecosystem. And you're starting to see companies with a SaaS-like mindset toward technology coming into an enterprise market with these ecosystems. It's a tough crowd, believe me; you know the enterprise. It's not easy to break into the enterprise, so for Databricks and Snowflake, that's a huge telltale sign. What's your reaction to that? Because it's great for Informatica, it's validation for them, but the start-ups are now growing very fast. I mean, I wouldn't just call Snowflake, at 3 billion dollars, a unicorn; it's a unicorn times three. But it's a telltale sign.
It's just something new we haven't seen. We've seen Cloudera break in; they kind of ramped their way in there with a lot of raised capital, and they had a big field sales force. But Databricks and Snowflake don't have a huge sales force. >> Yeah, I think it's all about clients understanding what true value someone provides. Is it someone we can rely on to keep our data safe? Do they have the capacity to scale? If you can crack those things, then you'll be in the market. >> Who are you attracting to MDM on Google Cloud? What does the early data look like? You don't have to name names, but what are some of the use cases that get the white-glove service from Deloitte on Google Cloud? Tell us about that. Give us more data on that. >> So, we've just announced it here at Informatica World, and we've got about three to four mid-to-large enterprises: one large enterprise and about three mid-size companies that are interested in it. We've been in talks with them in terms of how we want to do it. We don't want to open the floodgates; we'd like to make sure it's all stable, clients are happy, and there's word of mouth around it. >> I'm sure the end-to-end management piece of it, that's probably attractive. The end to end... >> Exactly. I mean, Deloitte is clearly the leader in the data analytics space, according to Gartner reports. Informatica is the leader in their space. GCP has great growth plans. So the three of them coming together is going to be a winner. >> One of the most pressing challenges facing the technology industry is the skills gap and the difficulty in finding talent. Surveys show that IT managers can't find qualified candidates for open Cloud roles. What are Deloitte's thoughts on this, and what are you doing as a company to address it? >> I mean, this is absolutely a good problem to have, for us. Right? It means that there is demand. But unless we meet that demand, it's a problem.
So we've been taking some creative approaches to addressing that. An example would be our Analytics Foundry offering, where we provide a pod of people ranging from data engineers with Python and Spark skills, to Java developers, to front-end developers. A whole stack of developers, a full stack; we provide that full pod so that they can go and address a particular business analytics problem, or some kind of visualization need, in terms of what clients want to get from the data. And we then leverage that pod across multiple clients. I think that's been helping us. >> If you could get an automated, full-time employee, that would be great. >> Yeah, and this digital FTE concept is something that we're looking at as well. >> I would like to add on to that as well. With the data disruption, Informatica is so busy, and because Informatica is so busy, Deloitte is so busy. Earlier, we hired plain Informatica folks, and then, with the Cloud disruption, we trained them on Cloud concepts. Now, what organizations, and universities, have to think about is putting Cloud concepts into their curriculum, so that students graduate with Cloud skills. Once they have those Cloud skills, we can train them on the Informatica skills, and Informatica has full training for that. >> I think it's a great opportunity for you guys. We were talking with Sally Jenkins and the team earlier, including the CEO. I was saying that it reminds me of the early days of VMware. With virtualization, you saw the shift, certainly the economics: you replaced servers, made a virtual change to the economics. With data, although not directly, it's a similar concept, where there are new operational opportunities, whether it's leveraging Google Cloud for, say, high-end modern data warehousing, or whatever. The community is going to respond. That's going to be a great ecosystem, a money-making opportunity.
The ability to add new services gives you guys more capabilities with customers, to really move the needle on creating value. >> Yeah, and it's interesting you mention VMware, because I actually helped as VMware stood up their VMC on AWS offerings on the Cloud. We helped them get ready for that GA and their data strategy, in terms of support, both for data and analytics readiness. So we see a lot of such tech companies moving to a flexible consumption service. The challenges are different, and we've got a whole practice around that flex consumption. >> I'm sure Informatica would love the VMware valuation. Maybe not a worry for Dell Technologies. >> We all would love that. >> Rajeev, Abhiman, thank you so much for joining us on theCUBE today. >> Thank you very much. Good talking to you. >> I'm Rebecca Knight, for John Furrier. We will have more from Informatica World tomorrow.
Sandeep Singh, HPE | CUBEConversation, May 2019
>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Welcome to theCUBE studios for another CUBE Conversation, where we go in-depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. One of the challenges enterprises face as they consider the new classes of applications they're going to use to create new levels of business value is how to best deploy their data in ways that don't add to the overall complexity of how the business operates. And to have that conversation, we're here with Sandeep Singh, who's the VP of storage marketing at HPE. Sandeep, welcome to theCUBE. >> Peter, thank you. I'm very excited. >> So Sandeep, I started off by making the observation that we've got this mountain of data coming in to a lot of enterprises. At the same time, the notion that data is going to create new classes of business value seems to be pretty deeply ingrained and acculturated in a lot of decision makers. So they want more value out of their data, but they're increasingly concerned about the volume of data that's going to hit them. In your conversations with customers, how are you hearing them talk about this fundamental challenge? >> That's a great question. Across the board, data is at the heart of applications, of pretty much everything that organizations do. And in conversations with customers, it really boils down to a couple of areas. One is: how is my data just effortlessly available, all the time, and always fast? Because fundamentally that's driving the speed of my business, and that's incredibly important. And how can my various audiences, including developers, just consume it like the public cloud, in a self-service fashion? And then the second part of that conversation is really about this massive data storm, this mountain of data, that's coming and that's going to be available. How do I drive a competitive advantage? How do I unlock the hidden insights
in that data, to uncover new revenue streams and new customer experiences? Those are the areas we hear about. And fundamentally, underlying it, the challenge for customers is: boy, I have a lot of complexity, and how do I ensure that I have the necessary insights into the infrastructure management, so that I, and my IT staff, aren't beholden to fighting the IT fires that can cause disruptions and delays to projects? >> So fundamentally, we want to be able to push time and attention away from the infrastructure, and the administration of those devices that handle the data, and move that time and attention up into how we deliver the data services, and ideally up into the applications that are going to actually generate these new classes of work within a digital business. Do I have that right? >> Absolutely. It's about infrastructure that just runs seamlessly; it's always on, it's always fast. People don't have to worry about: is it going to go down, is my data available, is it going to slow down? People don't want "sometimes fast"; they want "always fast," right? And that's governing the application performance that ultimately I can deliver. And you talked about: well, geez, if the data infrastructure just works seamlessly, then can I eventually get to the applications, and to building the right pipelines, ultimately, for mining that data, for doing the AI and machine learning, the analytics-driven insights from it? >> So we've got this significant problem. We now have to figure out how to architect, because we want predictability, certainty, and cost clarity in how we're going to do this. Part of the challenge, or part of the push here, is new use cases for AI. So we're trying to push data up so that we can build these new use cases, but it seems as though we also have to take some of those very same technologies and drive them down into the infrastructure, so we get greater intelligence, greater self-monitoring, and greater self-management and self-administration within the infrastructure itself. Have I got
that right? >> Yes, absolutely. What becomes important for customers, when you think about data and ultimately the storage that underlies the data, is that you can build and deploy fast and reliable storage, but that's only solving half the problem. Greater than 50% of the issues actually end up arising from the higher layers. For example, you could change the firmware on the host bus adapter inside a server, and that can trickle down and cause a data-unavailability or performance-slowdown issue. You need to be able to predict that all the way at that higher level, and then prevent it from occurring. Or your virtual machines might be in a state of memory over-commitment at the server level, or CPU over-commitment. How do you discover those issues and prevent them from happening? The other area that's becoming important is this whole notion of cloud and hybrid cloud, where complexity tends to multiply exponentially. When you're building that hybrid cloud infrastructure, a fundamental challenge is: even as I've got a new workload and I want to place it, even on-premises, because you've had lots of silos, how do you even figure out where I should place workload A, and how it will react with workloads B and C on a given system? And now you multiply that across hundreds of systems and multiple clouds, and you can see the challenge is multiplying exponentially. >> Oh yeah. Well, I would say that, you know, where do I put workload A? The right answer today may be here, but the right answer tomorrow may be somewhere else, and you want to make sure that the services required to perform workload A are resident and available, without a lot of administrative work necessary to ensure that there's commonality. That's kind of what we mean by this hybrid multi-cloud world, isn't it? >> Absolutely. And yet, when you start to think about it, you fundamentally end up requiring the data mobility aspect of it, because without the
data, you can't really move your workloads. And you need consistency of data services, so that if your app is architected for reliability and a set of data services, those just go along with the application. Then you need, building on top of that, portability for your actual application workload, consistently managed with a hybrid management interface. So we want an intelligent data platform that's capable of assuring performance, assuring availability, and assuring security, and that goes beyond that to deliver a simplified, automated experience, so that everything is just available through a self-service interface. And then it brings along a level of intelligence that's just built into it globally, so that instead of trying to manually predict, and landing in a world of reacting after IT fires have occurred, there is a sea of sensors, and the infrastructure is automatically predicting and preventing issues before they ever occur. And then, going beyond that, how can you actually fingerprint the individual application workloads, to then deliver prescriptive insights that keep the infrastructure always optimized? >> So: discerning the patterns of data utilization, so that the administrative cost of making sure the data is available where it needs to be goes down, number one; number two, assuring that data as an asset is made available to developers as they create new applications, new things that create new work; but also working very closely with the administrators so that they are not bound to an explosion in the number of tasks they have to perform to keep this all working. >> Yes. >> Okay, so we've got a number of different approaches to how this class of solution is going to hit the marketplace. Look, HPE has been around for 70 years, something along those lines. You've been one of the leaders in the complex systems arena for a long time, and that includes storage. Where are you guys taking some of this?
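Before the answer, a quick aside: the over-commitment signal described a moment ago comes down to simple arithmetic over inventory data. The thresholds, field names, and sample host below are invented for illustration; this is not HPE InfoSight's actual model.

```python
DEFAULT_LIMITS = {"memory_gb": 1.25, "vcpus": 4.0}  # illustrative limits only

def overcommit_ratio(host: dict, resource: str) -> float:
    """Sum what the VMs were promised and divide by what the host has.
    A ratio above 1.0 means the host has promised more than it owns."""
    allocated = sum(vm[resource] for vm in host["vms"])
    return allocated / host["capacity"][resource]

def flag_overcommitted(hosts, limits=DEFAULT_LIMITS):
    """Yield (host name, resource, ratio) wherever a limit is exceeded."""
    for host in hosts:
        for resource, limit in limits.items():
            ratio = overcommit_ratio(host, resource)
            if ratio > limit:
                yield host["name"], resource, round(ratio, 2)

# One hypothetical host whose VMs were promised more memory than it has:
hosts = [{"name": "esx-01",
          "capacity": {"memory_gb": 256, "vcpus": 64},
          "vms": [{"memory_gb": 128, "vcpus": 32},
                  {"memory_gb": 224, "vcpus": 48}]}]
alerts = list(flag_overcommitted(hosts))
```

The predictive platforms discussed here learn these limits per workload instead of hard-coding them, but the underlying check is the same capacity arithmetic.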
>> Yeah, so our strategy is to deliver an intelligent data platform. That intelligent data platform begins with workload-optimized, composable systems that can span mission-critical, general-purpose, secondary, big data, and AI workloads. We also deliver cloud data services that enable you to embrace hybrid cloud. All of these systems, all the way to the cloud data services, are plumbed with data mobility; for example, use cases of modernizing protection, and going all the way to protecting cost-effectively in the public cloud, are enabled. But really, all of these systems are imbued with a level of intelligence, a global intelligence engine, that begins with predicting and proactively resolving issues before they occur. And it goes way beyond that, delivering prescriptive insights built on global learning across hundreds of thousands of systems, with over a billion data points coming in on a daily basis, to put information at the fingertips of even the virtual machine admins: "this virtual machine is sapping the performance of this node, and if you were to move it to this other node, the performance, or the SLA, for the whole virtual machine farm will be even better." We build on top of that to deliver pre-built automation, hooked in with an API-first strategy, so that developers can consume it in a containerized application that's orchestrated with Kubernetes, or leverage it as infrastructure as code, whether with Ansible, Puppet, or Chef. We accelerate all of the application workloads and bring data protection, so it's available for the traditional business applications, whether they're built on SAP, Oracle, or SQL, for the virtual machine farms, and for the new-stack containerized applications. And then customers can build their AI and big data pipelines on top of the infrastructure with a plethora of tools, whether they're using Kafka, Elastic, MapR, or H2O. That
complete flexibility exists. And within HPE, we're then able to turn around and deliver all of this with an as-a-service experience, with HPE GreenLake, to customers. >> So that's where I want to take you next. How invasive is this going to be for a large shop? >> Well, it is completely seamless. With GreenLake, we're able to deliver a fully managed service experience with a cloud-like, pay-as-you-go consumption model, and, combining it with HPE Financial Services, we're also able to transform their organization and make it a fully self-funding journey as well. >> So today, the typical shop has got a bunch of administrators who are administering devices. That's starting to change; they've introduced automation that typically is associated with those devices. But, we think, three to five years out, folks are going to be thinking more in terms of data services and how those services get consumed, and that's going to be what the storage part of IT is thinking about. They can almost become data administrators, if I've got that right. >> Yes. Intelligence is fundamentally changing everything, not only on the consumer side but on the business side. A lot of what we've been talking about is that intelligence is the game changer; we actually see the dawn of the intelligence era. And through this AI-driven experience, what it means for customers is, first, a support experience that they just absolutely love. Secondly, it means that the infrastructure is always on, always fast, always optimized. And thirdly, in terms of making these data services available and unlocking data insights, it's all about enabling your innovators, the data scientists and the data analysts, to shrink that time to deriving insights from months literally down to minutes. Today there's this chasm that exists, where there's a great concept of "how can I leverage the AI technology," and between that concept and making it real,
to thinking about where it can actually fit, and then how to implement an end-to-end solution and a technology stack so that I just have a pipeline available to me, that chasm is literally a matter of months. What we're able to deliver, for example with HPE BlueData, is literally a catalog, self-service experience, where you can select and seamlessly build a pipeline in a matter of minutes, and it's all completely hosted, seamlessly. So it's making AI and machine learning essentially available for the mainstream. >> So the intelligent data platform makes it possible to see these new classes of applications become routine, without forcing the underlying storage administrators themselves to become data scientists. >> Absolutely. >> All right, well, thank you for joining us for another CUBE Conversation, Sandeep Singh. We really appreciate your time in theCUBE. >> Thank you, Peter. Fundamentally, what we're helping customers do is unlock data's potential to transform their businesses, and we look forward to continuing that conversation. >> Excellent. I'm Peter Burris. See you next time. [Music]
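The API-first, self-service consumption described in this conversation can be pictured as building a provisioning request against a storage REST endpoint. Everything below is hypothetical: the endpoint, field names, and protection options are invented for illustration and are not HPE's actual API.

```python
import json

def volume_request(name: str, size_gb: int, tier: str) -> dict:
    """Build the request a developer, an Ansible task, or a Kubernetes
    operator might POST to a storage REST endpoint. The endpoint and
    every field name here are hypothetical."""
    return {
        "endpoint": "https://storage.example.com/api/v1/volumes",  # hypothetical
        "method": "POST",
        "body": json.dumps({
            "name": name,
            "size_gb": size_gb,
            "tier": tier,
            "protection": {"snapshots": True, "replication": "async"},
        }),
    }

# A developer asking for a volume for a traditional database workload:
req = volume_request("oracle-data-01", 500, "mission-critical")
```

The point of an API-first design is exactly that this same request shape can be issued by a human, a CI pipeline, or an operator reconciling a Kubernetes manifest, with no per-device administration involved.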
Keynote | Red Hat Summit 2019 | DAY 2 Morning
>> Ladies and gentlemen, please welcome Red Hat President of Products and Technologies, Paul Cormier.
It revolved around an IBM seventy ninety, which was one of the first transistor based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges And while we had the goal of in the beginning of the technique and the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to trust, put our trust in machines, that wasn't the case. Back in nineteen sixty nine, the seven individuals that made up the Mercury Space crew were putting their their lives in the hands of those first computers. But on Sunday, July twentieth, nineteen sixty nine, these things all came together. The goal, the technology in the team and a human being walked on the moon. You know, if this was possible fifty years ago, just think about what Khun B. Accomplished today, where technology is part of our everyday lives. And with technology advances at an ever increasing rate, it's hard to comprehend the potential that sitting right at our fingertips every single day, everything you know about computing is continuing to change. Today, let's look a bit it back. A computing In nineteen sixty nine, the IBM seventy ninety could process one hundred thousand floating point operations per second, today's Xbox one that sitting in most of your living rooms probably can process six trillion flops. That's sixty million times more powerful than the original seventy ninety that helped put a human being on the moon. And at the same time that computing was, that was drastically changed. That this computing has drastically changed. So have the boundaries of where that computing sits and where it's been where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. 
Then with cloud, it extended all the way out to data centers that you didn't even own or have control of. But computing now reaches far beyond any data center. This is also referred to as the edge; you hear a lot about that. Apollo's version of the edge was the guidance system, a two-megahertz computer that weighed seventy pounds, embedded in the capsule. Today, the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's ten thousand times more powerful than that 7090 back in 1969. But even more impactful than the computing advances, combined with the pervasive availability of computing, are the changes in who and what controls that computing, similar to the social changes that have happened along the way, shifting from mathematicians to computers. We're now facing the same type of changes with regard to operational control of our computing power. In its first forms, operational control was your team, within your control; in some cases a single person managed everything. But as complexity grew, our teams expanded, and just like with the computing boundaries, system integrators and public cloud providers have become an extension of our team. But at the end of the day, it's still people that are making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend, and they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company or even one single country can solve alone.
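The scale comparisons above (the 7090 versus the Xbox One, and the watch at ten thousand times the 7090) can be sanity-checked with a little shell arithmetic. These use the round figures quoted on stage, not benchmark data:

```shell
# IBM 7090 (1969): ~100,000 flops. Xbox One: ~6 trillion flops, as quoted.
xbox_factor=$((6000000000000 / 100000))
echo "Xbox One vs 7090: ${xbox_factor}x"      # sixty million, as claimed

# Apple Watch quoted as 10,000x the 7090, which implies roughly:
watch_flops=$((100000 * 10000))
echo "Implied watch flops: ${watch_flops}"    # about a billion flops
```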
This is why open source is so important. This is why open source is so needed today in software, and this is why open source is so needed, even in the wider world, to solve other types of complex problems. And this is why open source has become the dominant development model, which is driving the direction of technology today: bringing together the best innovation from every corner of the planet to fundamentally change how we solve problems. This approach, and this access to innovation, is what has enabled open source to tackle big challenges, like building a truly open hybrid cloud. But even today it's really difficult to bridge the gap between the innovation that's available at all of our fingertips through open source development, and the production-level capabilities that are needed to really deploy this in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very, very beginning, bringing it to solve enterprise-class problems for the last seventeen-plus years. But when we built the model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway. To harness the innovation, we had to fully embrace the model. We made a decision very early on: give everything back, and we live by that every single day. We didn't do the things you hear so many do out there, where it's all open core, everything below the line is open and everything above the line is closed. We didn't do that. We gave everything back. Everything we learned in the process of becoming an enterprise-class technology company, we gave all of that back to the community, to make better and better software. This is how it works, and we've seen the results of that.
We've all seen the results of that, and it could only have been possible with the open source development model. We've been building on the foundation of open source's most successful project, Linux, and on the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company that we are today, and it has been Red Hat's journey. But we also had to set goals, and many of them seemed insurmountable at the time. The first of these was making Linux the enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, 2.1, but we knew we couldn't release a 1.0 product. We knew that, and we didn't. But we didn't want to allow any reason why any customer should look past RHEL as an option to solve their problems. Back then, we had to fight every single flavor of Unix in every single account. But we were lucky to have a few initial partners, big ISV partners, that supported RHEL out of the gate. And while we had the determination, we knew we also had gaps in our ability to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build, because we were having a customer issue on an older release. And then I watched in horror as he rifled through his desk, through a mess of CDs, magically came up with one, and said, "I found it, here it is," telling me not to worry, that he thought this was the right build. At that point I knew that, despite the promise of Linux, we had a lot of work ahead of us: not only to convince the world that Linux was secure, stable, and enterprise-ready, but also to make that a reality. But we did. And today this is our reality. It's all of our reality.
From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that many mission-critical customers run and bet their business on. And even bigger, today Linux is the foundation upon which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard for the innovation that builds around it. That's the innovation that's driving the future as well. We started our story with RHEL 2.1, and here we are today, seventeen years later, announcing RHEL 8, as we did last night. It's specifically designed for applications to run across the open hybrid cloud. RHEL 8 has become the best operating system from on-premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat global director of developer experience, Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski, and Brent Midwood. >> All right, we have some amazing things to show you. In just a few short moments we actually have a lot of things to show you, and Tim and Brent will be with us momentarily; they're working out a few things in the back, because a lot of this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also, keep in mind that this is the basis, our core platform, for everything we do here at Red Hat, so it is an honor for me to be able to show it to you live on stage today. And I recognize that many of you in the audience right now are hands-on systems administrators, systems architects, and engineers, and we know that you're under ever-growing pressure to deliver needed infrastructure resources ever faster; that is a key element of what you're thinking about every day. Well, this has been a core theme in our design decisions behind Red Hat Enterprise Linux 8, an intelligent operating system which is making it fundamentally easier for you to manage machines at scale. So I hope what you're about to see next feels like a new superpower, and that Red Hat is your force multiplier. So first, let me introduce you to Lars. He's totally my Linux guru. >> I wouldn't call myself a guru, but I guess you could say that I want to bring Linux enlightenment to more people. >> Okay, well, let's dive in and have a look at RHEL 8. >> Sure, let me log in. >> Wait a second. There's Windows? >> Yeah, we built the web console into RHEL. That means that for the first time you can log in from any device, including your phone or this standard Windows laptop. So I just go ahead and type in my standard Linux credentials here. >> Okay, so now you're putting your Linux password in over the web? >> Yeah, that might sound a bit scary at first, but of course we're using the latest security tech, TLS and CSP. And because it's the standard Linux auth, you can use everything that you're used to, like SSH keys, OTP tokens, and things like that. >> Okay, so now I see the console right here. I love the dashboard overview of the system, but what else can you tell us about this console? >> Right here you see the load of the system and some of its properties, but you can also dive into logs, everything that you're used to from the command line. Or look at services: these are all the services I have running; I can start and stop them, and enable them. >> Okay, I love that feature right there.
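(The web console shown here, Cockpit, wraps the same systemd operations you would otherwise run by hand. A minimal sketch of those equivalents; `httpd` is just an example service name, not one from the demo:)

```shell
# Command-line equivalents of the web console's services and logs views.
systemctl list-units --type=service --state=running   # the services overview
journalctl -u httpd --since today                     # dive into a service's logs
systemctl start httpd                                 # start a service
systemctl enable httpd                                # enable it at boot
```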
>> So what about if I have to add a whole new application to this environment? >> Good that you're bringing that up. We built a new feature into RHEL called Application Streams, which is a way for you to install different versions of your app stack that are supported. I'll show you with yum on the command line, but since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console. And since it's in the browser, I can even make this a bit bigger. To see, for example, the application streams that we have for PostgreSQL, I just do a module list, and I see that we have 10 and 9.6, both supported, with 10 as the default. And if I enable 9.6, the next time I install PostgreSQL it will pull the latest packages from the 9.6 stream. >> Okay, so this is very cool. I see two versions of PostgreSQL right here, with 10 as the default. That is fantastic, and Application Streams make that happen. But I'm really kind of curious: I love using Node.js and Java, so what about multiple versions of those? >> Yeah, that's exactly the idea. We want to keep up with the fast-moving ecosystems of programming languages. >> Okay, but I have another key question; I know some people are thinking it right now. What about Python? >> Yeah, in fact, even in a minimal install like this, typing python gives you "command not found." You just have to type it correctly: you can install whichever one you want, python2 or python3, whichever your application needs. >> Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all you guys right here. Keep this amongst yourselves; don't let Paul know. I'm actually not a Linux systems administrator. I'm an application developer, an application architect, and I recently had to go figure out how to extend the file system. This is for real.
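The Application Streams flow Lars just walked through comes down to a few commands. A sketch, using the stream versions quoted in the demo:

```shell
# List the available streams for PostgreSQL (10 is the RHEL 8 default).
yum module list postgresql
# Switch the enabled stream to 9.6...
yum module enable postgresql:9.6
# ...so that a plain install now pulls packages from the 9.6 stream.
yum install postgresql
```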
And I'm going to the Red Hat knowledge base and looking up things like, you know, pvcreate, vgextend, resize2fs, and I have to admit, that's hard. >> Right. I've opened the storage page for you right here, where you see an overview of your storage. And the console is made for people like you as well, not only for true Linux nerds. Even if you do run some of these commands, you only run them some of the time, and you don't remember them. So, for example, I have a filesystem here that's a little bit too small. Let me just grow it; it's just, you know, dragging this slider. It calls all the commands in the background for you. >> Oh, that is incredible. Is it that simple, just drag and drop? That is fantastic. Well, so actually, you know, I have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal, like using one of those funky ergonomic keyboards, you know the ones I'm talking about, right? >> You know, a lot of people, including me and people in the audience, like that dark art, right? And this is not taking any of that away. It's an additional tool to bring Linux to more people. >> Okay, well, that is absolutely fantastic. Thank you so much for that, Lars. And I really love how installing everything is so much easier, including PostgreSQL and, of course, the Python that we saw right there. So now I want to change gears for a second, because I have another situation that I'm always dealing with, and that is, every time I want to build a new Linux system, I don't want to have to install those components again and again; it feels like I'm doing it over and over. So, Josh, how would I create a golden image, one VM image that I can use with everything pre-baked in? >> Yeah, absolutely. We get that question all the time. So RHEL includes image builder technology.
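The manual procedure Burr alludes to, the one the console's slider drives for you, looks roughly like this. A sketch only: the device name, volume group, and mount point are made up for illustration:

```shell
# Grow an LVM-backed filesystem by hand. /dev/sdb, the "rhel" volume group,
# and /data are hypothetical names, not taken from the demo.
pvcreate /dev/sdb                   # initialize a new physical volume
vgextend rhel /dev/sdb              # add it to the volume group
lvextend -L +10G /dev/rhel/data     # grow the logical volume by 10 GiB
xfs_growfs /data                    # grow an XFS filesystem in place
# (resize2fs is the ext4 equivalent, as mentioned in the talk)
```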
Image builder technology is actually the same hybrid cloud operating system image tooling that we use to build our own images, rolled up in a nice, easy-to-integrate system. So if I come here in the web console and go to the image builder tab, it brings us to blueprints. Blueprints are what we use to control what goes into our golden images. And I heard you and Lars talking about PostgreSQL and Python, so I went and started typing here, and if you go to the selected components, you can see I've created a blueprint that has all the Python and PostgreSQL packages in it. The interesting thing about this is that it builds on our existing kickstart technology, but you can use it to deploy to whatever cloud you want. And it's saved, so you don't actually have to know all the various incantations from Amazon to Azure to Google, whatever; it's all baked in. And when you do this, you can actually see the dependencies that get brought in as well. >> Okay. Should we create one live? >> Yes, please. >> All right, cool. So if we go back to the blueprints page and we click create blueprint, let's make a developer blueprint here. So we click create, and you can see here on the left-hand side I've got all of my content served up by Red Hat Satellite. We have a lot of great stuff in there, but we can go ahead and search. So we'll look for Postgres, and since it's a developer image, we'll add the client for some local testing. Then we'll come in here and add the Python bits, and we probably need a compiler if we're going to actually build anything, so we'll look for GCC here. And hey, what's your favorite editor? >> Emacs, of course. >> Emacs, all right. Hey, Lars, how about you? >> I'm more of a vim person. >> vim, all right. Well, if you want to prevent a holy war in your systems, you can actually use Satellite to filter that out.
But we're going to go ahead and add them all; we don't want a fight on stage. So we just point and click in the graphical UI, and then when we're all done, we just commit our changes, and our image is ready to build. >> Okay. So this VM image we just created right now from that blueprint, I can actually go out there and easily deploy this across multiple cloud providers, as well as on the hardware we have right here on stage? >> Yeah, absolutely. We can deploy on Amazon, Azure, Google, any infrastructure you're looking for, so you can really build your hybrid cloud operating system images. >> Okay. All right, let's see it. >> We just go and click create image. We can select our different output types here. I'm going to go ahead and create a local VM, because it's a portable image and maybe we want to pass it around, and I just need a few moments for it to build. >> Okay. So while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking, "I love what I see in RHEL 8, but what does it take to upgrade from 7 to 8?" So, Lars, can you show us and walk us through an upgrade? >> Sure. This is my little blog server that I set up, but it's still running on 7.6. So let's upgrade it. I jump over to my Satellite, and you see all my RHEL machines here, including the one I showed you the web console on before. And there is the one with my blog, and there are a couple of others; let me select those as well, this one and that one. I just go up here, schedule a remote job, choose the upgrade, and hit submit. I made it so that it takes a snapshot beforehand, so if anything goes wrong, it can roll back. >> Okay, okay, so now it's progressing here. >> It's progressing. Looks like it's running. >> Doing a live upgrade on stage. >> Hmm, seems like one is failing.
What's going on here? Okay, let's check the pre-upgrade check. Oh yeah, that's the one I was playing around with Btrfs on backstage. It detected that, and, you know, the upgrade doesn't run, because we don't support upgrading that. >> Okay, so what I'm hearing now, the good news is, we were protected from a possible failed upgrade there. So it sounds like these upgrades are perfectly safe. I can basically, you know, schedule this during a maintenance window and still get some sleep. >> Totally. That's the idea. >> Okay, fantastic. All right, so it looks like upgrades are easy and perfectly safe, and I really love what you showed us there. It's a point-and-click operation right from Satellite. Okay, so while we were checking out upgrades, I want to know, Josh, how are those VMs coming along? >> They went really well. You were away for so long, I got a little bored and I took some liberties. >> What do you mean? >> Well, the image built, and, you know, I decided I'm going to go ahead and deploy it here to this Intel machine on stage. So I have that up and running in the web console. I built another one on the ARM box, which is actually pretty fast, and that's up and running on this ARM machine. And that went so well that I decided to spin some up in Amazon, so I've got a few instances here running in Amazon with the web console accessible there as well, and even more of our pre-built images up and running in Azure, with the web console there too. So the really cool thing about this, Burr, is that all of these images were built with image builder in a single location, controlling all the content that you want in your golden images, deployed across the hybrid cloud. >> Wow, that is fantastic. And you might think that's it, but we actually have more to show you. So thank you so much for that, Lars and Josh; that is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before, but we have more to talk to you about. And there's one thing that many of the operations professionals in this room right now know: provisioning VMs is easy, but it's really day two, day three, down the road, that those VMs require day-to-day maintenance. As a matter of fact, several of you folks right now in this audience have to manage hundreds, if not thousands, of virtual machines. I recently spoke to a gentleman who has to manage thirteen hundred servers. So how do you manage those machines at great scale? Oh great, Tim and Brent have now joined us, so it looks like they worked things out. So now I'm curious, Tim: how would we manage hundreds, if not thousands, of computers? >> Well Burr, one human managing hundreds or even thousands of VMs is no problem, because we have Ansible automation. And by leveraging Ansible's integration into Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here. I'm going to show you a Satellite inventory, and as Red Hat publishes patches, with that Ansible integration we can easily apply those patches across our entire fleet of machines. >> Okay, that is fantastic. So all the machines can get updated in one fell swoop. >> They sure can. And there's one thing that I want to bring your attention to today, because it's brand new, and that's cloud.redhat.com. Here at cloud.redhat.com you can view and manage your entire inventory no matter where it sits: Red Hat Enterprise Linux on-prem, like on stage, private cloud, or public cloud. It's true hybrid cloud management. >> Okay, but one thing. One thing I know is in the minds of the audience right now, and if you have to manage a large number of servers this comes up again and again: what happens when you have those critical vulnerabilities? That next zero-day CVE could be tomorrow. >> Exactly.
I've actually been waiting for a while, patiently, for you to get to the really good stuff. >> So there's one more thing that I wanted to let folks know about Red Hat Enterprise Linux 8, and some features that we have there. >> Oh yeah? What is that? >> So, actually, one of the key design principles of RHEL 8 has been working with our customers over the last twenty years to integrate all the knowledge that we've gained, and turn that into insights that we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And so what we actually have here are a few things we can take a look at, to show folks what that is. >> Okay, so we basically have this new feature we're going to show people right now. And one thing I want to make sure of: is it absolutely included within the Red Hat Enterprise Linux 8 subscription? >> Yes. That's an announcement that we're making this week: this is a brand new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. >> So I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. >> Absolutely, absolutely. So let's take a quick look and try this out. What we have here is a list of about six hundred rules. They're configuration, security, and performance rules, and this list is growing every single day, so customers can opt in to the rules that are most applicable to their enterprises. What we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt into sending us. Customers have opted in and are now sending us more data every single night than they had sent in total over the last twenty years via any other mechanism. >> Now I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature.
>> Yeah, I'm betting that those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we'll get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. >> Okay, let's do that real quick. >> So I've got to hop back over to Satellite here. This is the Satellite that we saw before. I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL 8, and via single sign-on I can jump right from Satellite over to the web console. So it's really, really easy. I'll grab a terminal here, and registering with Insights is really, really easy: it's one command. What's happening right now is the box is going to gather some data, it's going to send it up to the cloud, and within just a minute or two we're going to have some results that we can look at back on the web interface. >> I love it. So it's just a single command, and you're ready to register this box right now. That is super easy. Well, that's fantastic, Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical, and smartest operating system on the planet, period. And while I think it's cute how you can go ahead and opt in on a single machine, I'm going to show you one more thing. This is Ansible Tower. You can use Ansible Tower to manage and govern your Ansible playbook usage across your entire organization, and with this, what I can do is, on every single VM that was spun up here today, opt in and register with Insights with a single click of a button. >> Okay, I want to see that right now. I know everyone's waiting for it as well. But hey, your VM is ready, Josh. Lars? >> Yeah, my clock is running a little late now. >> Yeah, Insights is a really cool feature of RHEL, and I've got it in all my images already. >> All right, I'm doing it. All right. And so, as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. >> Okay, so all those on-stage VMs, as well as the hybrid cloud VMs, should be popping in. And I see the PostgreSQL ones as well. Fantastic. >> That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker going off script in the first live demo. Let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular dnsmasq issue. It's going to affect a couple of machines; we saw that in the overview, and I can go and get some more details about what this particular issue is. If you take a look at the right side of the screen, there's a likelihood and an impact associated with this particular issue, and what that really translates to is that there's a high level of risk to our organization from this particular issue, but also a low risk of change. What that means is that it's really, really safe for us to go ahead and use Ansible to remediate this. So I can grab the machines, select those two, and remediate with Ansible. I can create a new playbook. It's our maintenance window, but we'll do something along the lines of "stuff Tim broke," and that'll be our cause; we can name it whatever we want. So we'll create that playbook and take a look at it, and it's actually going to give us some details about the machines, you know, what type of reboots are going to be needed and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. This is happening from the cloud, and we're affecting machines no matter where they are. They could be on-prem, they could be in a hybrid cloud, a public cloud, or a private cloud. And these things are going to be remediated very, very easily with Ansible.
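A remediation like the one Brent runs can be sketched as a small Ansible playbook. This is illustrative only: the host group, filename, and task are hypothetical, not the playbook Insights actually generated on stage. (Registration itself is a single command on each box; on current RHEL it is `insights-client --register`.)

```shell
# Hypothetical sketch of an Insights-style remediation: patch dnsmasq
# across a fleet. The "rhel_fleet" group and playbook body are made up.
cat > stuff-tim-broke.yml <<'EOF'
- name: Remediate dnsmasq advisory across the fleet
  hosts: rhel_fleet
  become: true
  tasks:
    - name: Update dnsmasq to the patched version
      yum:
        name: dnsmasq
        state: latest
EOF
ansible-playbook stuff-tim-broke.yml
```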
So it's really, really awesome. Everybody here with a Red Hat Enterprise Linux subscription has access to this now, so I kind of want everybody to go try this; like, we really need to get this thing going and try it out right now. >> But don't run out of the room just yet; you get to stay here. >> Okay, Mr. Excitability. So, after this keynote, come back to the Red Hat booth; there's an optimization section, and you can come talk to our Insights engineers. And even though it's really easy to get going on your own, they can help you out and answer any questions you might have. >> So this is really the start of a new era, with an intelligent operating system, and you just saw right now what Insights enables you to do. Fantastic. So we're enabling systems administrators to manage Red Hat Enterprise Linux at a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we went a little bit sideways here in moments, but we need to get off the stage. There's one thing I want you guys to think about, all right? Do come check out the booth, like Tim just said, and also in our demos, get hands-on with Red Hat Enterprise Linux 8 as well. But really, I want you to think about this: one human and a multitude of servers. And remember that one thing I asked you up front: do you feel like you got a new superpower, and Red Hat is your force multiplier? All right, well, thank you so much, Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on stage. >> Wasn't that brilliant? Just, as always, amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the Summit, and what a great way to showcase it. Thanks so much to you, Burr. Thanks, Brent, Tim, Lars, and Josh. Just, thanks again. So you've just seen this team demonstrate how impactful RHEL can be on your data center.
So hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about those all the time. As I just told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing; it's also the foundation for the fastest supercomputers in the world. And our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory HPC solution architect Robin Goldstone. >> Thank you so much, Robin. So welcome, welcome to the Summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab, and how high-performance computing really works at this level? >> Sure. So Lawrence Livermore National Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology, and high-performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a UNIVAC 1, was ordered by Edward Teller before our lab even opened, back in 1952. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost our job is to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile. Since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science-based methods. We rely on HPC to simulate the behavior of complex weapons systems, to ensure that they can function as expected well beyond their intended life spans. >> That's actually great. So are you really still running on that UNIVAC? >> No, actually, we've moved on since then. So Sierra is Lawrence Livermore's latest and greatest supercomputer. It's currently the second fastest supercomputer in the world, and for the geeks in the audience, and I think there are a few of them out there, we put up some of the specs of Sierra on the screen behind me. A couple of things worth highlighting are Sierra's peak performance and its power utilization. One hundred twenty-five petaflops of performance is equivalent to about twenty thousand of those Xbox Ones that you mentioned earlier, and the eleven point six megawatts of power required to operate Sierra is enough to power around eleven thousand homes. Sierra is a very large and complex system, but underneath it all it starts out as a collection of servers running Linux, and more specifically, RHEL. >> So did Lawrence Livermore National Lab use RHEL before Sierra? >> Oh yeah, most definitely. We've been running RHEL for a very long time on what I'll call our mid-range HPC systems. These clusters, built from commodity components, are sort of the bread and butter of our computing center, and running RHEL on these systems provides us with continuity of operations and a common user environment across multiple generations of hardware, and also between Lawrence Livermore and our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world-class supercomputer like Sierra. Historically, these systems have been built from sort of exotic proprietary hardware, running entirely closed-source operating systems. Anytime something broke, which was often, the vendor would be on the hook to fix it. And you know, that sounds like a good model, except that what we found over time is that most of the issues we had on these systems were due either to the extreme scale or to the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes.
So their ability to reproduce our problems was pretty limited. In some cases, they even sent an engineer on site to try to reproduce our problems. But even then, sometimes we wouldn't get a fix for months, or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, that challenge is one of the driving reasons for open source, you know, for open source even existing. How did Sierra change things around open source for you? >> Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we wanted to run an open source operating system, and a strong preference for RHEL. At the time, IBM was working with Red Hat to add support to RHEL for their new little-endian Power architecture, so it was really just natural for them to bid a RHEL-based system for Sierra. Running RHEL on Sierra allows us to leverage the model that's worked so well for us for all this time on our commodity clusters: any packages that we build for x86, we can now build for Power, as well as for Arm architectures, using our internal build infrastructure. And while we have a formal support relationship with IBM, we can also tap our in-house kernel developers to help debug complex problems. Our sysadmins can now work on any of our systems, including Sierra, without having to pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems. And if a security vulnerability comes out, we don't have to chase around getting fixes from multiple OS vendors. >> You know, you've been able to extend your foundation all the way from x86 to exascale supercomputing. We talk about giving customers, we talk about it all the time, a standard operational foundation to build upon.
This is exactly what we've envisioned. So what's next for you guys? >> Right. So what's next? So Sierra's just now going into production, but even so, we're already working on the contract for our next supercomputer, called El Capitan, that's scheduled to be delivered to Lawrence Livermore in the 2022-2023 timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on the solid foundation that RHEL has provided us for well over a decade. >> Well, thank you so much for your support of RHEL over the years, Robin, and thank you so much for coming and telling us about it today. We can't wait to hear more about El Capitan. Thank you. Thank you very much. So now you know why we're so proud of RHEL, and why you saw confetti cannons and t-shirt cannons last night. So, you know, as Burr and the team talked about in the demo, RHEL is the force multiplier for servers. We've made Linux one of the most powerful platforms in the history of platforms. But just as Linux became a viable platform with access for everyone, and RHEL became more viable every day in the enterprise, open source projects began to flourish around the operating system, and we needed to bring those projects to our enterprise customers in the form of products with the same trust model as we did with RHEL. Seeing the incredible progress of software development occurring around Linux led us to the next goal that we set for ourselves. That goal was to make hybrid cloud the default enterprise architecture. How many of you out here in the audience are RHCEs or RHCAs? A lot. A lot. You are the people that are building the next generation of computing, the hybrid cloud. You know, again, it's just like our goals around Linux.
These goals might seem a little daunting in the beginning, but as a community we've proved it time and time again: we are unstoppable. Let's talk a bit about what got us to the point we're at right now, and the work that, as always, we still have in front of us. We've been on a decade-long mission on this. Believe it or not, this mission was to build the capabilities needed around the Linux operating system to really build and make the hybrid cloud. When we saw RHEL first taking hold in the enterprise, we knew that was just the first step, because for a platform to really succeed, you need applications running on it, and to get those applications on your platform, you have to enable developers with the tools and runtimes for them to build upon. Over the years, we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on premises, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon, ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and re-architected. And given the fact that environments spanned multiple locations, solid management became even more important.
Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. We invested over the years in management right alongside the platform, from Satellite in the early days, to CloudForms, to Insights, and now Ansible. We focused on having management to support the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data, and to have a common operating platform for your applications, you need a storage solution that's just as flexible as that platform, able to run on premises just as well as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit, and NooBaa, complementing our platform with Red Hat storage. For us, even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when that wasn't necessarily available. By 2014, our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still very much depended on a significant amount of operating system configuration. And then containers happened. Containers, despite having been in existence for a very long time, exploded onto the scene as a technology in 2014. Kubernetes followed shortly after, in 2015, allowing containers to span multiple locations, and in one fell swoop, containers became the killer technology to really enable the hybrid cloud.
And here we are. Hybrid is really the only practical reality and way forward for customers, and at Red Hat, we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments. We've been constantly learning about what has caused problems and what has worked well in many cases. And while we've amassed a pretty big amount of expertise to solve most any challenge in any area of that stack, it takes more than just our own learnings to build the next-generation platform. Today we're also introducing OpenShift 4, which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers, and not simply just with our customers in mind. This is something that could only be possible in an open source development model. And just like RHEL is the force multiplier for servers, OpenShift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them at scale. And we've also announced Azure Red Hat OpenShift. Last night, Satya on this stage talked about that in depth. This is all about extending our goal of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing operators, which are the central feature here. We talked about this work last year with the Operator Framework.
And today, we're not going to just show you OpenShift 4, we're going to show you operators running at scale: operators that will do updates and patches for you, letting you focus more of your time on running your infrastructure and running your business. We want to make all this easier and intuitive, so let's have a quick look at how we're doing just that. >> I know all of you have heard we're talking to potential new customers about the rollout. So, new plan: just open it up as a service, to be launched by this summer. Look, I know this is a big ask for a not-very-big team. I'm open to any and all ideas. >> Please welcome back to the stage Red Hat global director of developer experience Burr Sutter, with Jessica Forrester and Daniel McPherson. >> All right, we're ready to do some more now. Earlier, we showed you Red Hat Enterprise Linux running on lots of different hardware, like this hardware you see right now, and we're also running across multiple cloud providers. But now we're going to move to another world, of Linux containers. This is where you'll see OpenShift 4 and how you can manage large clusters of containerized applications across the hybrid cloud. We're going to see that this is where software operators fundamentally empower human operators, and especially make Ops and Dev work more efficiently and effectively together than ever before. Right, we have two folks on the stage right now. They represent Ops and Dev, and we're going to see how they build and run an application together. Okay, so let me introduce you to Dan. Dan is totally representing all our Ops folks in the audience here today, and he's totally my Ops comfort person, so let's just call him Mr. Ops. So, Dan? >> Thanks, Burr. With OpenShift 4, we have a much easier time setting up and maintaining our clusters. In large part, that's because OpenShift 4 has extended management of the cluster down to the infrastructure itself, across diverse kinds of environments.
When you take a look at the OpenShift console, you can now see the machines that make up the cluster, where a Machine represents the infrastructure underneath that Kubernetes node. OpenShift 4 now handles provisioning and deprovisioning of those machines. From there, you can dig into an OpenShift node and see how it's configured and monitor how it's behaving. >> So I'm curious, though, does this work on bare metal infrastructure as well as virtualized infrastructure? >> Yeah, that's right, Burr. So physical nodes, virtual machines, OpenShift 4 can now manage it all. Something else we found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, operators are responsible for updating the entire platform, including the nodes, the control plane, and even the operating system, Red Hat Enterprise Linux CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by technology called operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. >> All right, I love the fact that all of that's in one console now. You can see the full stack, all the way down to the bare metal, right there in that one console. Fantastic. So I want to switch gears for a moment, though, and now let's talk to Dev. So Jessica here represents all our developers in the room. As a matter of fact, she manages a large team of developers here at Red Hat, but more importantly, she represents our VPs of development who have large teams they have to worry about on a regular basis. So, Jessica, what can you show us? >> Well, Burr, my team has hundreds of developers, and we're constantly under pressure to deliver value to our business.
And frankly, we can't really wait for Dan and his Ops team to provision the infrastructure and the services that we need to do our job. So we've chosen OpenShift as our platform to run our applications on. But until recently, we really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan's actually going to let us install on the cluster. But now, with OperatorHub, we're really seeing the ecosystem be unlocked, and the technologies are there, the things that my team needs: databases and message queues, tracing and monitoring. And these operators are actually responsible for complex applications, like Prometheus here. And they're written in a variety of languages and tools, including Ansible. >> But that is awesome. So I do see a number of options there already, and Prometheus is a great example. But how do you know that one of these operators really is mature enough and robust enough for Dan and the Ops side of the house? >> Well, Burr, here we have the operator maturity model, and this is going to tell me and my team whether this particular operator is going to do a basic install, whether it's going to upgrade that application over time through different versions, or all the way out to full auto-pilot, where it's automatically scaling and tuning the application based on the current environment. >> And that's very cool. >> So, coming over to the OpenShift console, now we can actually see that Dan has made the SQL Server operator available to me and my team. That's the database that we're using. >> SQL Server, that's a great example. So SQL Server is running here in the cluster? But here's a great example for a developer: what if I want to create a new SQL Server instance? >> Sure. It's as easy as provisioning any other service from the developer catalog.
We come in, and I can type in SQL Server, and what this is actually creating is a native resource called SqlServer. You can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its life cycle. And from this Installed Operators view, I can see the operators running in my project and which resources they're managing. >> Okay, but I'm kind of missing something here. I see this custom resource here, the SqlServer, but where are the Kubernetes resources, like pods? >> Yeah, I think it's cool that we get this native resource now called SqlServer, but if I need to, I can still come in and see the native Kubernetes resources, like the StatefulSet and Service here. >> Okay, that is fantastic. Now, we did say earlier on, though, like many of our customers in the audience right now, you have a large team of engineers, a large team of developers you've got to handle. You've got to have more than one SQL Server, right? >> We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat, and our Jenkins pipelines, and our Node.js app that is going to actually talk to that SQL Server database. >> Okay, so at this point, can we provision some of these? >> Yes. So, since all of this is self-service for me and my teams, I'm actually going to go and create one of all of those things I just said, on all of our projects, right now, if you just give me a minute. >> Okay? Well, right. So basically, you're going to spin up Node.js, Jenkins, SQL Server. All right, now that's like hundreds of bits of application-level infrastructure, right now, live. So, Dan, are you not terrified? >> Well, I guess I should have done a little bit better job of managing Jess's quota. Historically, Jess and
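A custom resource like the SqlServer one Jessica describes is just a small piece of YAML that the operator watches for and reconciles the cluster to match. Here is a minimal sketch; the API group, version, and field names are illustrative assumptions, not the actual schema of the SQL Server operator shown on stage:

```yaml
# Hypothetical custom resource; the operator reconciles it into the
# underlying StatefulSet, Service, and pods. Field names are illustrative.
apiVersion: example.com/v1
kind: SqlServer
metadata:
  name: team-a-db
  namespace: team-a
spec:
  version: "2.2"
  replicas: 3        # HA configuration, as in the demo
  storage: 50Gi
```

Applying this with `oc apply -f sqlserver.yaml` is the "promise" Jessica mentions: the operator notices the resource and takes it from there.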
I might have had some conflict here, because creating all these new applications would have meant my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So, since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, and so I get an idea of what's happening across the entire cluster. And I can see now we have 494 pods already running, and there's a few more still starting up. And if I scroll through the list, we can see the different workloads Jessica just mentioned: Tomcats, and Node.js's, and Jenkinses, and SQL Servers down here too. >> You know, I see it continues creating, and you have close to 500 pods running there. >> So, yeah, let me filter this list down by SQL Server, so we can see just those. >> Okay. But aren't you running up against cluster capacity at some point? >> Actually, yeah, we definitely have a limited capacity in this cluster. Luckily, though, we already set up autoscalers, and so because the additional workload was launching, we can see now those autoscalers have kicked in, and some new machines are being created that don't yet have nodes on them, because they're still starting up. And there's another good view of this as well, so you can see machine sets. We have one machine set per availability zone, and you can see that each one is now scaling from ten to twelve machines. And the way those autoscalers work is, for each availability zone, if capacity is needed, they will add additional machines to that availability zone, and then later, if capacity is no longer needed, they will automatically take those machines away. >> That is incredible. So right now we're auto-scaling across multiple availability zones based on load. Okay, so it looks like capacity planning and automation are fully handled at this point.
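The per-availability-zone scaling Dan describes is configured in OpenShift's machine API with a MachineAutoscaler per MachineSet. A minimal sketch, following OpenShift 4 conventions; the MachineSet name here is hypothetical and cluster-specific:

```yaml
# One MachineAutoscaler per availability zone's MachineSet lets that
# zone grow from 10 to 12 machines under load, then shrink back.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 10
  maxReplicas: 12
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-worker-us-east-1a   # hypothetical MachineSet name
```

A cluster-wide ClusterAutoscaler resource (not shown) enables the overall behavior; one MachineAutoscaler like this is then created per zone, which is why the demo scales each availability zone independently.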
But I do have another question. You're logged in as the cluster admin right now in the console. Can you show us your view of software operators? >> Actually, there's a couple of unique views here for operators for cluster admins. The first of those is OperatorHub. This is where a cluster admin gets the ability to curate the experience of what operators are available to users of the cluster. And so obviously we already have the SQL Server operator installed, which we've been using. The other unique view is Operator Management. This gives a cluster admin the ability to maintain the operators they've already installed. And so if we dig in and see the SQL Server operator, we'll see we have it set up for manual approval. And what that means is, if a new update comes in for SQL Server, then a cluster admin has the ability to approve or disapprove that update before it installs into the cluster. And actually, there is an upgrade that's available. I should probably wait to install this, though. We're in the middle of scaling out this cluster, and I really don't want to disturb Jessica's application workflow. >> Yeah, so, actually, Dan, it's fine. My app is already up, it's running. Let me show it to you over here. So this is our products application that's talking to that SQL Server instance, and for debugging purposes, we can see which version of SQL Server we're currently talking to. It's 2.2 right now. And then which pod, since this is a cluster and there's more than one SQL Server pod we could be connected to. >> Okay, I can see it right there at the bottom of the screen: 2.2. That's the version we have right now. But, you know, this is kind of the point of software operators at this point. So, you know, everyone in this room wants to see you hit that upgrade button. Let's do it, live, here on stage.
So whenever you updated operator, it's just like any other resource on communities. And so the first thing that happens is the operator pot itself gets updated so we actually see a new version of the operator is currently being created now, and what's that gets created, the overseer will be terminated. And that point, the new, softer operator will notice. It's now responsible for managing lots of existing Siegel servers already in the environment. And so it's then going Teo update each of those sickle servers to match to the new version of the single server operator and so we could see it's running. And so if we switch now to the all projects view and we filter that list down by sequel server, then we should be able to see us. So lots of these sickle servers are now being created and the old ones are being terminated. So is the rolling update across the cluster? Exactly a So the secret server operator Deploy single server and an H A configuration. And it's on ly updates a single instance of secret server at a time, which means single server always left in nature configuration, and Jessica doesn't really have to worry about downtime with their applications. >> Yeah, that's awesome dance. So glad the team doesn't have to worry about >> that anymore and just got I think enough of these might have run by Now, if you try your app again might be updated. >> Let's see Jessica's application up here. All right. On laptop three. >> Here we go. >> Fantastic. And yet look, we're We're into two before we're onto three. Now we're on to victory. Excellent on. >> You know, I actually works so well. I don't even see a reason for us to leave this on manual approval. So I'm going to switch this automatic approval. And then in the future, if a new single server comes in, then we don't have to do anything, and it'll be all automatically updated on the cluster. >> That is absolutely fantastic. And so I was glad you guys got a chance to see that rolling update across the cluster. 
That is so cool. The SQL Server database being automated and fully updated, that is fantastic. All right, so I can see how a software operator enables you to manage hundreds, if not thousands, of applications. I know a lot of folks are interested in the backend infrastructure. Could you give us an example of the infrastructure behind this console? >> Yeah, absolutely. So we all know that OpenShift is designed to run in lots of different environments, but our teams think that Azure Red Hat OpenShift provides one of the best experiences, by deeply integrating the OpenShift resources into the Azure console. It's even integrated into the Azure command-line tool with the az openshift command. And, as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show everyone related to OpenShift. This is also new with OpenShift 4: we now have multi-cluster management. This gives you the ability to keep track of all your OpenShift environments, regardless of where they're running, and you can create new clusters from here. And I'll dig into the Azure cluster that we were just taking a look at. >> Okay, but is this user interface something I have to install on one of my existing clusters? >> No, actually, this is a hosted service that's provided by Red Hat as part of cloud.redhat.com, and so all you have to do is log in with your Red Hat credentials to get access. >> That is incredible. So one console, one user experience, to see across the entire hybrid cloud. We saw it earlier with RHEL and Red Hat Insights, and now we see it for multi-cluster management of OpenShift. So you can fundamentally see now that software operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making Dev and Ops work more efficiently together than ever before.
So we saw the rich ecosystem of those software operators. We can manage them across the hybrid cloud with any OpenShift instance. And more importantly, I want to thank Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here. >> Once again, thanks so much to Burr and his team, Jessica and Dan. So you've just seen how OpenShift operators can help you manage hundreds, even thousands, of applications: install, upgrade, remove nodes, control everything about your application environment, virtual, physical, all the way out to the cloud, making things happen when the business demands it, even at scale, because that's where it's going to get you. Our next guest has lots of experience with demand at scale, and they're using open source container management to do it. They're building a successful cloud-first platform, and they're the 2019 Innovation Award winner. >> Please welcome 2019 Innovation Award winner, Kohl's senior vice president of technology, Rich Hodak. >> How you doing? Thanks. >> Thanks so much for coming out. We really appreciate it. So I guess you guys set some big goals, too. So can you maybe tell us about the bold goal you personally helped set for Kohl's, and what inspired you to take that on? >> Yes. So it was 2017, and life was pretty good. I had no gray hair, and our business was, well, our tech was working well. But we knew we'd have to do better into the future if we wanted to compete. Retail's being disrupted. Our customers are asking for new experiences. So we set out on a goal to become an open hybrid cloud platform, and we chose Red Hat to partner with us on a lot of that. We set off on a three-year journey. We're currently in year two, and so far all KPIs are on track, so it's been a great journey thus far. >> That's awesome. That's awesome.
So you obviously think open source is the way to do cloud computing, and we absolutely agree with you on that point. So what is it that's convinced you even more along the way? >> Yeah. So I think, first and foremost, we do have a lot of traditional ISVs, but we found that the open source partners actually are outpacing them with innovation. So I think that's where it starts for us. Secondly, we think there's maybe some financial upside to going more open source. We think we can maybe take some cost out and unwind from these big ELAs we're in. And thirdly, as we go to universities and we interview, we started hearing, hey, what is Kohl's doing with open source? And we wanted to use that as a lever to help recruit talent. So I'm kind of excited. You know, we partner with Red Hat on OpenShift and on RHEL and Gluster and ActiveMQ and Ansible and lots of things, but we've also now launched our first open source projects. So it's really great to see this journey we've been on. >> That's awesome, Rich. So you're in a high-touch beta with OpenShift 4. So what features, components, or capabilities are you most excited about and looking forward to at the launch, and what are maybe some new goals that you might be able to accomplish with the new features? >> Yeah. So I will tell you, we're off to a great start with OpenShift. We've been on the platform for over a year now. We won an Innovation Award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform at Kohl's, and we're excited about OpenShift 4. I think there's probably three things that we're really looking forward to. One is, we're looking forward to a better upgrade process, and I think we saw some of that in the last demo. So upgrades have been kind of painful up until now.
So we think that will help us. Number two, a lot of the workloads we run on OpenShift today are the stateless apps, right? And we're really looking forward to moving more of our stateful apps onto the platform. And then thirdly, I think that we've done a great job of automating a lot of the day-one stuff, you know, the provisioning of things. There's great opportunity out there to do more automation for day-two things: to integrate more with our messaging systems and our database systems and so forth. So we're excited to get on board with version four, too. >> Well, you know, I hope we can help you get to those next goals, and we're going to continue to do that. >> Thank you. >> Thank you so much, Rich. You know, all the way from RHEL to OpenShift, it's really exciting for us, frankly, to see our products helping you solve real-world problems, which is really why we do this, and it speaks to both of our goals. So thank you. Thank you very much. And thanks for your support. We really appreciate it. >> Thanks. >> It has all been amazing so far, and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center, with your own infrastructure. We've been helping our customers do that in these environments for almost twenty years now. We've been running the most complex workloads in the world. But, you know, while the public cloud has opened up tremendous possibilities, it also brings in another layer of infrastructure complexity. So what's our next goal? Extend your data center all the way to the edge, while being as effective as you have been over the last twenty years, when it was all at your own fingertips. First, from a practical sense, enterprises are going to have to have their own data centers, in their own environments, for a very long time.
But there are advantages to being able to manage your own infrastructure that expand even beyond the public cloud, all the way out to the edge. In fact, we talked about that very early on: how technology advances in compute, networking, and storage are changing the physical boundaries of the data center every single day. The need to process data at the source is becoming more and more critical. New use cases are coming up every day. Self-driving cars need to make decisions on the fly. In the car factory, processes using AI need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a car's paint job as it comes off the line, where a massive amount of data is only needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late. The damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store, or elsewhere at the edge, usually involving massive amounts of data that just can't easily be moved. Just like these use cases couldn't be solved in private cloud alone, because of things like latency on data movement to address real-time requirements, they also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed, and the only model forward. So how do you address this class of workload that requires all of the above, running at the edge, with the latest technology, all at scale? Let me give you a bit of a preview of what we're working on. We are taking our open hybrid cloud technologies to the edge, integrated with our OEM hardware partners. This is a preview of a solution that will contain Red Hat OpenShift, Ceph storage, and KVM virtualization, with Red Hat Enterprise Linux at the core, all running on pre-configured hardware.
The first hardware out of the gate will be with our long-time OEM partner, Dell Technologies. So let's bring Burr and the team back to see what's right around the corner. >> Please welcome back to the stage Red Hat global director of developer experience, Burr Sutter, with Karima Sharma. >> Okay. We just saw how operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things. Okay, so just be ready for that. I know many of our customers in this audience right now, as well as the customers who aren't even here today, are running tens of thousands of applications on OpenShift clusters. We know that's happening right now. But we also know that you're not actually in the business of running Kubernetes clusters. You're in the business of oil and gas, you're in the business of retail, you're in the business of transportation, you're in some other business, and you don't really want to manage those things at all. We also know, though, that you have low-latency requirements, like Paul was talking about, and you also have data-gravity concerns, where you need to keep that data on your premises. So what you're about to see right now in this demonstration is where we've taken OpenShift 4 and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hypervisor below this platform. It's OpenShift running on bare metal. And this is your Kubernetes-native infrastructure, where we've brought together VMs, containers, networking, and storage. With me right now is Karima Sharma. She's one of our engineering leaders, responsible for infrastructure technologies. Please welcome to the stage, Karima. >> Thank you. My pleasure to be here at Red Hat Summit. So let's start at cloud.redhat.com.
And here we can see the cluster Daniel and Jessica were working on just a few moments ago. From here we have a bird's-eye view of all of our OpenShift clusters across the hybrid cloud, from multiple cloud providers to on-premises, and notice the bare metal cluster. Well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that cluster. Now, in this demo, we'll take a look at three things. First, a multi-cluster inventory for the open hybrid cloud at cloud.redhat.com. Second, OpenShift Container Storage, providing converged storage for virtual machines and containers, and the same functionality for KubeVirt and bare metal. And third, everything we see here is Kubernetes-native, so by plugging directly into Kubernetes orchestration, we gain common storage, networking, and monitoring facilities. Now, last year, we saw how container-native virtualization and KubeVirt allow you to run virtual machines on Kubernetes and OpenShift, allowing for a single converged platform to manage both containers and virtual machines. So here I have this darknet project. Now, from last year, we had a Windows virtual machine running the ASP darknet application, and we had started to modernize and containerize it by moving parts of the application from the Windows VM into Linux containers. So let's take a look at it. Here I have it again. >> Oh, look, your Windows machine. Earlier on, I was playing this game backstage, so it's just playing a little solitaire. Sorry about that. >> So we don't really have time for that right now, Burr. But as I was saying, over here I have Visual Studio. Now, the Windows virtual machine is just another container in OpenShift, and the RDP service for the virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this can be broadened even further.
Kubernetes-native infrastructure is our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads, using OpenShift on metal, running all the way from the data center to the edge. Now, there are two main benefits: one, to help reduce operational costs, and second, to help bring advanced Kubernetes orchestration concepts to your infrastructure. So next, let's take a look at storage. OpenShift Container Storage is software-defined storage, providing the same functionality for both the public and the private clouds. By leveraging the Operator framework, OpenShift Container Storage automatically detects the available hardware configuration to utilize the disks in the most optimal way. So when adding a node, you don't have to think about how to balance the storage. Storage is just another service running on OpenShift. >> And I really love this dashboard, quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though, Karima: what kind of applications would you use with the storage? >> Yeah, so this is persistent storage, to be used by databases, your files, and any data from applications such as Apache Kafka. Now, the Apache Kafka operator uses Kubernetes for scheduling and high availability, and it uses OpenShift Container Storage to store the messages. Now, here our on-premises system is running a Kafka workload, streaming sensor data, and we want to sort it and act on it locally, right, in an environment where maybe we need low latency, or maybe in a data-lake-like situation. So we don't want to send the data to the cloud; instead, we want to act on it locally, right? Let's look at the Grafana dashboard and see how our system is doing. So with an incoming message rate of about four hundred messages per second, the system seems to be performing well, right?
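As a rough illustration of the "storage is just another service" idea described here, an application team requests capacity declaratively instead of provisioning disks. The sketch below builds a PersistentVolumeClaim manifest in plain Python; the storage class name, PVC name, and size are illustrative assumptions for this sketch, not details taken from the demo.

```python
# Hypothetical sketch: requesting persistent storage for a Kafka broker as
# "just another service" -- a PersistentVolumeClaim manifest built as a dict.
# The storage class below is an assumed name, not confirmed by the demo.

def build_pvc(name: str, size_gi: int,
              storage_class: str = "ocs-storagecluster-ceph-rbd") -> dict:
    """Return a PVC manifest dict; apply it with any Kubernetes client."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = build_pvc("kafka-broker-0-data", 100)
print(pvc["spec"]["resources"]["requests"]["storage"])  # 100Gi
```

The point of the sketch is only that the platform, not the operator of the hardware, decides where the bytes land; the software-defined storage layer balances the disks underneath.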
I want to emphasize this is a fully integrated system. We're doing the testing and optimizations so that the system can auto-tune itself based on the applications. >> Okay, I love the automated operations. Now, I am curious, because I know other folks in the audience want to know this too: can you tell us more about how it's truly integrated with Kubernetes? Can you give us an example of that? >> Yes. Again, you know, I want to emphasize everything here is managed fully by Kubernetes and OpenShift, right? So you can really use the latest tools to manage it all. Right. Next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live-migration event. >> Okay, Knative is a great example. If you were actually part of my breakout session yesterday, you saw me demonstrate Knative. And actually, if you want to get hands-on with it tonight, you can come to our guru night at five p.m. and actually get hands-on with Knative. So I have really enjoyed using Knative myself as a software developer. But I am curious about the Azure Functions component. >> Yeah, so Azure Functions is a functions-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes. So it works really well with our on-premises OpenShift here. Right now, I have a simple Azure Function that I already have here, and this Azure Function, you know, let's see if this will send out a tweet every time we live-migrate a Windows virtual machine. Right, so I have it integrated with OpenShift, and let's move a node to maintenance to see what happens. >> So basically, as that VM moves, we're going to see the event triggered, and that will trigger the function. >> Yeah. An important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We're investing heavily in automation through the use of the Operator framework, and also providing integration with the hardware.
Right. So next, now let's move that node to maintenance. >> But let's be very clear here. I want to make sure you understand one thing, and that is there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare metal hosts. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this node to maintenance. So I start the maintenance now. What will happen at this point is storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will live-migrate, right? And this will create Kubernetes events, so we can see, you know, the events in the event stream; changes have started to happen. And as a result of this migration, the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM. Right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source node report: migration has succeeded. It's pretty cool stuff right here. Now, we want to bring you a cloud-like experience, and what this means is we're making operational ease of use a top goal. We're investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads on our auto-tuning system. >> All right, well, I know right now you want to jump on the stage and check out the bare metal cluster. But you should not, right? Wait. After the keynote, then come on, check it out.
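The event-driven reaction in the demo can be pictured roughly as: something watches the cluster's event stream for a "migration succeeded" event and then calls out (on stage, a Knative-triggered Azure Function posting a tweet). Below is a minimal, hypothetical Python sketch of just the filtering step; the event field names and the shape of the stream are illustrative assumptions, not the actual code from the stage.

```python
# Minimal sketch of reacting to a live-migration event. In the demo this role
# is played by a Knative-triggered Azure Function; here we simply filter a
# stream of event dicts and emit a notification message per success.
# Field names below are illustrative assumptions.

def migration_notifications(events):
    """Yield one notification string per successful live migration."""
    for event in events:
        if event.get("kind") == "VirtualMachineInstanceMigration" \
                and event.get("phase") == "Succeeded":
            yield (f"VM {event['vm']} live-migrated "
                   f"from {event['source_node']} to {event['target_node']}")

stream = [
    {"kind": "Pod", "phase": "Running"},
    {"kind": "VirtualMachineInstanceMigration", "phase": "Succeeded",
     "vm": "winserver-darknet", "source_node": "worker-0",
     "target_node": "worker-2"},
]
for message in migration_notifications(stream):
    print(message)  # VM winserver-darknet live-migrated from worker-0 to worker-2
```

In a real cluster the `stream` would come from a watch on the Kubernetes events API rather than a hard-coded list, and the notification call would be whatever side effect the function scripts, a tweet in the demo's case.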
But also, I want you to go out there and think about visiting our partner Dell and their booth, where they have one of these clusters also. Okay. So this is where VMs, containers, networking, and storage all come together in a Kubernetes-native infrastructure. You've seen it right here on this stage. But Karima, you have a bit more. >> Yes. So this is literally the cloud coming down from the heavens to us. >> Okay? Right here, right now. >> Right here, right now. So, to close the loop, you can have your cluster connected to cloud.redhat.com for our Insights and site reliability engineering services, so that we can proactively provide you with guidance through automated analyses of telemetry and logs, and help flag a problem even before you notice you have it, be it software, hardware, performance, or security. And one more thing: I want to congratulate the engineers behind this cool technology. >> Absolutely. There's a lot of engineers here that worked on this cluster and worked on the stack. Absolutely. Thank you. Really awesome stuff. And again, do go check out our partner Dell. They're just out that door; I can see them from here. They have one of these clusters. Get a chance to talk to them about how to run your OpenShift 4 on a bare metal cluster as well. Right, Karima? Thank you so much. That was totally awesome. We're out of time, and we've got to turn this back over to Paul. >> Thank you. >> Thanks again, Burr and Karima. Awesome. You know, so even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the first platform tenet that we learned with RHEL: that the platform has to be developer-friendly. Our next guest knows something about connecting a technology like OpenShift to their developers as part of their company-wide transformation, and about their ability to shift the business to take advantage of the innovation. They're our Innovation Award winner this year.
Please, let's welcome Ed to the stage. >> Please welcome twenty nineteen Innovation Award winner, BP Vice President of Digital Transformation, Ed Alford. >> Thanks, Ed. How are you? >> Good. >> So let's get right into it. What are you guys trying to accomplish at BP, and how is this goal important within your organization? >> So we're a global energy business, with operations in over seventy countries. And we've embraced what we call the dual challenge, which is meeting the increasing demand for energy that we have as individuals in the world, but producing that energy with fewer emissions. As part of that, one of the strategic priorities that we have is to modernize the whole group. That means simplifying our processes and enhancing productivity through digital solutions. So we're using cloud-based technologies and, more importantly, open source technologies to create a community across the whole group that collaborates effectively and efficiently, and uses our data and expertise to embrace the dual challenge and actually try and help solve that problem. >> That's great. So how do these new ways of working benefit your team, and really the entire org, maybe even the company as a whole? >> So we've been given the Innovation Award for Digital Conveyor, both in the way it was created and also in what it is delivering. A couple of the guys from the team are in the audience. Their teams developed that conveyor using Agile and DevOps. We talk about this stuff a lot, but they actually did it in a truly Agile and DevOps way, which enabled them to experiment and work in different ways, and it highlights the skill set that we, as a group, require in order to transform. Using these approaches, we can now move things from ideation to scale in weeks and days sometimes, rather than months.
And I think that if we can take what they've done and use more open source technology, we can take that technology and apply it across the whole group to tackle this dual challenge. And I think it's really cool that we can now use technology, and open source technology, to solve some of these big challenges that we have, and actually just preserve the planet in a better way. >> So what's the next step for you guys at BP? >> So moving forward, we are embracing becoming a cloud-first organization. We need to continue to deliver on our strategy, build out the technology across the entire group to address the dual challenge, and continue to make some of these bold changes, and actually get in and really use our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves and our children and our children's children. >> That's a big goal. But thank you so much, Ed. Thanks for your support. And thanks for coming today. >> Thank you very much. Thank you. >> Now comes the part that, frankly, I think is the best part of this presentation. We're going to meet the type of person that makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them reach the kinds of bold goals you've heard about today, and the ones you'll hear about more throughout the week. >> I think the thing I like most about it is you feel that reward of just helping people, I mean, and helping people with stuff you enjoy, right, with computers. My dad was the math and science teacher at the local high school, and so in the early eighties that kind of made him the default computer person. So he was always bringing in computer stuff, and I started at a pretty young age. >> What Jason's been able to do here is more evangelizing a lot of the technologies between different teams.
I think a lot of it comes from the training and the certifications that he's got. He's always concerned about their experience: how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. >> We're a loan company, you know. That's why we lean on a company like Red Hat. That's where we get our support from. That's why we decided to go with a product like OpenShift. >> I really, really like the product, so I went down the certification route and the training route to learn more about OpenShift itself. My daughter's teacher, they were doing a day of coding, and so they asked me if I wanted to come and talk about what I do, and then spend the day helping the kids do their coding class. >> The people that we have on our teams, like Jason, are what make us better than our competitors, right? Anybody can buy something off the shelf. It's people like him that are able to take that and mold it into something that then is a great offering for our partners and for customers. >> Please welcome Red Hat Certified Professional of the Year, Jason Hyatt. >> Jason, congratulations. Congratulations. What a big day, huh? What a really big day. You know, it's great. It's great to see such work, you know, that you've done here. But you know what's really great, and shows out in your video: it's really especially rewarding to us, and I'm sure to you as well, to see how skills can open doors for young women, like your daughters, who already love technology. So I'd like to present this to you right now. Congratulations. And I know you're going to bring this passion, I know you bring this, to everything you do. So congratulations again. >> Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to show them the experience. >> It's really great. It's really great to see them all here as well.
Maybe you guys could stand up. So before we leave the stage, you know, I just wanted to ask: what's the most important skill that you'll pass on from all your training to the future generations? >> So I think the most important thing is you have to be a continuous learner. You can't really settle for, ah, you can't be comfortable with only what you already know. You have to really be driven as a continuous learner. And of course, you've got to keep using what you learn. >> I don't even have to ask you the question. Of course. Right. Of course. That's awesome. That's awesome. And thank you. Thank you for everything, for everything that you're doing. So thanks again. Thank you. You know, what makes open source work is passion, and people that apply those considerable talents, that passion, like Jason here, to making it work and to contribute their ideas back. And believe me, it's really an impressive group of people. You know, your family, and especially Berkeley in the video, I hope you know that the Red Hat Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that. So you should be very, very happy for that. I also can't wait to come back here on this stage ten years from now and present that same award to you, Berkeley. So great. You should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've had a set of goals, and realized some bold goals, over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, build a company based solely on open source software. It seems so logical now, but it had never been done before. Next, build the operating system of the future, one that's going to run and power the enterprise, making it the standard Linux-based operating system in the enterprise.
And after that, make hybrid cloud the architecture of the future, make hybrid the new data center, all leading to the largest software acquisition in history. Think about it: all around a company with one hundred percent open source DNA throughout. Despite all the FUD we encountered over those last seventeen years, I have to ask: is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day Red Hat was born. But we only got to that goal because of you: many of you contributors, many of you new to open source software and willing to take the risk alongside of us, and many of you partners on that journey, both inside and outside of Red Hat. Going forward with the reach of IBM, Red Hat will accelerate even more. This will bring open source innovation to the next-generation hybrid data center, continuing on our original mission and goal to bring open source technology to every corner of the planet. What I just went through in the last hour or so, while mind-boggling to many of us in the room who have had a front-row seat over these last seventeen-plus years, has only been Red Hat's first step. Think about it: we have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multi-billion-dollar enterprise software world, and even the next-generation hybrid architecture would not be possible without Linux at the core and the open innovation that it feeds to build around it. This is not just a step forward for software. It's a huge leap in the technology world, beyond even what the original pioneers of open source ever could have imagined. We have witnessed open source accomplish in the last seventeen years more than what most people will see in their career.
Or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say, to everybody in this room and beyond, everyone outside: continue the mission. Thanks, have a great Summit.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Adam Ball | PERSON | 0.99+ |
Jessica | PERSON | 0.99+ |
Josh Boyer | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Timothy Kramer | PERSON | 0.99+ |
Dan | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Tim | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jason | PERSON | 0.99+ |
Lars Carl | PERSON | 0.99+ |
Kareema Sharma | PERSON | 0.99+ |
Wilbert | PERSON | 0.99+ |
Jason Hyatt | PERSON | 0.99+ |
Brent | PERSON | 0.99+ |
Lenox | ORGANIZATION | 0.99+ |
Rich Hodak | PERSON | 0.99+ |
Ed Alford | PERSON | 0.99+ |
ten | QUANTITY | 0.99+ |
Brent Midwood | PERSON | 0.99+ |
Daniel McPherson | PERSON | 0.99+ |
Jessica Forrester | PERSON | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
Lars | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Robin | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Karima | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
seventy pounds | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
John F. Kennedy | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Edward Teller | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Teo | PERSON | 0.99+ |
Kareema | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Python | TITLE | 0.99+ |
seven individuals | QUANTITY | 0.99+ |
BP | ORGANIZATION | 0.99+ |
ten ten thousand times | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Chris | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
python | TITLE | 0.99+ |
Today | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Robin Goldstone | PERSON | 0.99+ |
Ashok Ramu, Actifio | Google Cloud Next 2019
>> Live from San Francisco, it's theCUBE, covering Google Cloud Next 2019. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to Google Cloud Next 2019 everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host Stu Miniman. John Furrier is also here. Three days of wall-to-wall coverage of Google's big cloud customer event. This is day two. Ashok Ramu is here; he is the vice president of Cloud at Actifio, based in Boston. Great to see you again. Thanks for coming on. >> Happy to be here. >> So, big show. Actifio, the category creator, battling it out in a very competitive space, and doing very well. Give us the update on what's going on with your company. >> So first, to follow up, we're super excited to be here at Google Next. We're one of the strategic partners for Google and have been working well across all departments. We had a great announcement: today we announced Actifio GO for Google, a SaaS offering dedicated to the Google platform. We want the Actifio experience to be that much better and easier for people running data sets anywhere, particularly in Google. Google has been one of our premier partners over the last, I would say, three years or so. We've gone from strength to strength, so we're very happy to be here and super excited to be launching this offering. >> You guys started Actifio, and it was clear you saw a market beyond just backup, beyond just insurance. You started to develop, you popularized, copy data management. That term, everybody uses it today. You sort of focused on other areas: DevOps, analytics, and things of that nature. How has that gone? How has it resonated with customers? Where are you getting the most traction today? >> Great question. I mean, it's gone really well, right?
We've kind of been the leader, like you said, setting up the category and basically changing the way data is looked at and managed. Data is no longer a commodity; it's an asset, and we're enabling companies to leverage it in many different ways. And the cloud is here. Everybody wants to go to the cloud. Every customer we talk to, every prospect we touch, wants to leverage cloud, and Google is coming in with a lot of strength, a lot of capabilities. So what we're building in terms of data transformation, the data-aware application technologies we have, is resonating very well. The DevOps space we talked about, you know, is the tip of the spear for us. Those accounts are over seventy percent of our business, and the last I checked, over sixty to seventy percent of our customers are leveraging cloud in some form, be it for DevOps, cloud bursting, DR, and all of those categories. And, you know, having a very strong enterprise DNA lets us deal with scale very easily, take complex applications and make them look simple. That's been our strength for the past nine years. So we continue to strengthen our work with Google to make the platform even stronger. >> When I think back to those early days, you were the enterprise architect, and it was like, okay, let me understand that architecture, the building blocks, you know, the software IP that you have. But it's been quite a different discussion I've been having with your team the last couple of years, because, as you say, cloud is front and center, and not surprising to hear that DevOps is a big piece of it. Help us update kind of that journey. And, you know, a full SaaS offering today: how did you get from kind of the origin of the company to, you know, a SaaS offering? >> Sure, right. I mean, we always knew we had a phenomenal product, right? And phenomenal customers.
We have a large number of customers with us. And you know what we realized is that for adoption, you know, to understand how cloud works and how customers can easily manage the cloud, the experience becomes much more important. So the SaaS offering is more about how you experience the same great Actifio technology with push-button ease of use. We enable the implementation, installation, and ingestion of data in minutes. So by the time you're done with the whole process, you're already starting to leverage Actifio technology in the cloud of your choice. And Actifio GO for Google particularly targets SAP HANA, SQL, and other complex workloads. These workloads have traditionally been very infrastructure heavy, very people heavy in terms of managing, and what we've done is radically transform how you manage those workloads. A lot of the conversations I've had over the last twenty-four hours have been HANA this and HANA that: how do I make HANA simple? I've heard Actifio is the way to go for managing SAP HANA; how do you guys tackle it? These are very interesting conversations with a lot of thought leaders who help us not only build a better product but also improve the experience, and they take it from there. So that's how I would see the transformation for the company. >> Why does Actifio make HANA simple? What is it specifically about you guys that differentiates you? >> I think that's a great question. So HANA in general has been a very complicated, hard to install, hard to manage application. What Actifio brings in is native application technology, right? We don't go after infrastructure, we don't go after just storage, but we look at the application as a whole. So when we work application-down, we learn the application.
We figure out how it works, how it works best, and what the best way is to capture it and present data back, which is what it's all about. When you start from there, it's a hard problem to tackle, so it takes a little bit of time for us to tackle that problem. But when the solution comes out, it works one way across all platforms. So we've had customers moving data from on-prem to the cloud, and they don't see a difference. They used to go left, now they go right, but the application works the same way. A developer using HANA is using HANA the same way yesterday as today, even though the database moved from on-prem to the cloud. That transformation requires the level of abstraction and understanding of the application that we have automated and built into our engine. >> Okay, the hard question for data protection and data management folks today is: how are you attacking SaaS? For most companies we ask that question, the answer is that it's roadmap. Maybe that's the case for you too. But what is your strategy with regard to SaaS? Because something triggered me when you talked about the application view, and I know Ashok's background: the systems view, the application view, has always been his expertise, your company's expertise. How is that an opportunity for you guys? Is it one that you're actually actively pursuing? If so, explain. If not, why not? Is it on the roadmap? >> So it's certainly an opportunity we're pursuing, and, you know, we're working with a number of SaaS vendors to figure out, again, where is the critical data mass? SaaS is a number of components, and the essence of any particular application is, you know, where is the workload, what is the state machine, and how do you manage it? That's the key element. And once you tackle that, the SaaS application is like any other application.
So we have, you know, people working with us to build custom connectors for, like, Office 365 and other elements of SaaS products. So as time evolves, you'll see us start working on this. We'll have announcements for Cloud SQL and other Google platform-as-a-service offerings, and Amazon RDS. Those offerings are coming, and we will basically be building the platform. And once the platform comes, just like Actifio has done before, we will tackle the SaaS applications one by one. >> Is it more of a technical challenge or a business challenge? >> It's a business challenge. And you know, for us we have to focus on where the customers want to go, where the enterprise customers want to go. And SaaS at this point is, I would say, emerging to be a place where the enterprise wants to adopt at scale. So we're certainly focusing on that. >> And I think there's a perception too, Stu, that, well, the SaaS vendor is there in the cloud, they've got my data protected, so I'm good. >> Yeah, well, we know that's not the case; they need to worry about that. >> And I said protected, and that's not fair to you guys, because it's a much wider scale. >> So, you know, we were talking about SAP, and we've watched some of these, you know, big, tough applications, and they're moving to the clouds. There's a lot of choices out there. You've made announcements specifically about Google. What can you tell us about why customers are choosing Google? And do you have any stories about joint Google customers that you'd love to share? >> I would say, let's start off. You know, I would thank Google, because it's one of the key partners for us. We've done many, many millions of dollars of business together last year, and we want to double that number this year. It's been everything from companies that have fifteen to twenty VMs to companies that have twenty thousand, so it spans the gamut.
You know, from an infrastructure perspective, Google is the best of the breed. Nobody knows infrastructure, compute, and memory better than Google. Nobody knows networking better than Google. Nobody knows security better than Google. So these are the choices for enterprises. Now they're saying, okay, Google is a choice. And as I see in the field today: last year it was, I have a project, maybe Google; this year it is, how do I do ABC with Google? So the conversations have shifted from, should I do Google, to, how do I do ABC with Google? And then you marry Actifio's technology, which is infrastructure agnostic (we don't care where the application runs), with that Google infrastructure, and it creates a very powerful combination for enterprises to adopt. >> So just as a follow-up, when we talk to customers here, multi-cloud is the reality. How does that play into your story? And where do you see that fit? >> We were always built multi-cloud. Right from day one, Actifio's platform architecture, everything, has been infrastructure agnostic. So when you build something for VMware or Amazon, it works as-is in Google. And with the latest cloud mobility capabilities that we announced a few months ago, you can move data seamlessly between different cloud platforms. In fact, customers have chosen Actifio to be their de facto data protection platform across all their workloads. So you could see Actifio being the one platform that is the golden standard to protect complex workloads like SAP HANA across clouds. >> You mentioned you have a team in Hyderabad. What are they working on? Is it sort of part of the broader development team, your cloud focus, Google focus? What's the story? >> The team in Hyderabad is very much integrated with our engineering team out of Boston. So, you know, they're basically equivalent. We all work together collaboratively.
The talent in Hyderabad is now building a lot of our cloud technologies, as well as the emerging technologies. So we've been able to staff up a very strong team and a set of very strong partners to kind of help us augment what we have here. The leaders here are basically leveraging the resources in Hyderabad to accelerate the development, because, like, you know, there's never a shortage of work. >> Okay, so you're following the sun, and the talent pool in that part of India has really exploded. You've seen the big companies, all the cloud providers, all the new ride-share companies there in the war for talent. So talk roadmap a little bit. What could we expect going forward? You know, show us a little leg, if you would. >> So you can see a lot more announcements around Actifio GO for Google. We'll be enhancing the experience around, you know, adapting and ingesting SAP and SQL, etcetera. You'll be looking at a lot of our SaaS integration offerings that are coming out; you talk about Office 365, Cloud SQL, Amazon RDS, things like that. We'll have a migration suite. And talk about how you ingest and manage Kubernetes containers, because that's becoming commonplace today, right? How do you tackle complex containerized microservices? That's a major focus for us, and we'll continue to, you know, build and integrate further into the application ecosystem. Because these applications are not getting simpler; SAP is continuing to build more complex applications. How do you tackle that in the roadmap and keep up with it? That's what we're going to be focusing on. >> So Actifio GO, we talked about that a little bit. That's the announcement here, that's your hard news. It's gone to shipping, and once it's available... >> It's a SaaS offering, so there's nothing to ship, you know. >> So, an actual SaaS pricing model?
It's an actual SaaS pricing model, a SaaS offering with one-click purchase and easy install. So yes. >> Stu's laughing because so many SaaS offerings aren't cloud pricing. >> It's not an entity for reporting; it's not an entity that just gives you a bunch of glamour screens. It is actually taking your HANA workloads and giving you data protection, backup, and disaster recovery. So it is the true Actifio, the time-tested Actifio, now being offered as a SaaS product. >> And how are you going to market with that product? >> So we have a number of vendors, fellow Google partners here, and we get to work with them to kind of generate the demand and awareness. This has been in the works for over six months now, so it's not something that came out of the blue, and we've been working with Google in formulating the roadmap. >> What is the Actifio ecosystem looking like these days? How is that evolving? >> I would say, you know, the customers are front and center of our ecosystem. We've always built the company with a customer-first mentality, and they drive a lot of our innovation because they give us a lot of requirements. They reach us from different angles. So they've helped us push the cloud roadmap; they've helped us push to the point where they want faster adoption. And that's kind of where we're going: the ecosystem is still around enterprises, but the enterprises are trying to innovate themselves because now data is readily available. So whether it's large financial institutions, GDPR, these are all the requirements they're throwing at us. Okay, you can manage data: how do you air gap it? How do you work with object storage? How do you work with different kinds of technologies? They want to work with us.
And, you know, we've always stepped up to the plate saying, sure, if it's a new piece of technology that we feel is viable and has a roadmap, we'll jump at it and solve the problem with you. That's always been the way we partner and grow the company. >> You mentioned air gap. Something we haven't talked about this week is ransomware, which we talk about at most conferences. It's one of those unpleasant things that's a tailwind for companies like yours. >> Right. And we have an offering on ransomware. If you look at cyber resiliency, we're the only product in town where, if you're hit by ransomware, you can instantly recover and say, oh, ransomware hit me on the seventeenth of January; anything after that is gone, but at least I can get back to the seventeenth of January and start my business up. Otherwise, every other product out there will take weeks or months to figure it out. So, you know, that's another type of solution that came up. Nobody's happy about ransomware, but it does happen, so we have a solution for the problem. >> Thanks so much for coming on theCUBE. >> Happy to be here. >> So we'll see you back in Boston. All right, thanks. Thanks for watching everybody, this is theCUBE. We'll be here tomorrow, day three. Stu Miniman, Dave Vellante, and John Furrier at Google Cloud Next. We'll see you tomorrow. Thanks for watching.
SUMMARY :
It's theCUBE covering... Great to see you again. So Google has been one of our premier partners over the last... You started to develop, you popularized, copy data management. The DevOps space we talked about, you know... Okay, let me understand that architecture, the building blocks, you know, the software IP that you have. So the SaaS offering is more about how you experience the same great Actifio technology. So what Actifio brings in is native... For most companies we ask that question, the answer is that it's roadmap; maybe that's the case for you too. So we have, you know, people working with us to build custom connectors for... of the first technical challenge. And you know, for us we have to focus on where the customers want to go. And I think there's a perception too, Stu, that, well, the SaaS vendor is there in the cloud. So, you know, we were talking about SAP, and we've watched some of these, you know... How do I do ABC with Google, and then you marry Actifio's technology. And where do you see that fit? So when you build... You mentioned you have a team in Hyderabad. Like, you know, there's never a shortage of work. What could we expect going forward? You know, show us a little leg, if you would. So you can see a lot more announcements around Actifio GO for Google; we'll be enhancing the experience. So Actifio GO... It's a SaaS offering, so there's nothing to ship, you know. It's not an entity that just gives you a bunch of glamour screens. So we have a number of vendors, fellow Google partners here. What is the Actifio ecosystem looking like these days? The way we partner and grow the company. Ransomware, which we talk about at most conferences. So, you know, that's another type of solution. So we'll see you back in Boston.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Erik Kaulberg | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Jason Chamiak | PERSON | 0.99+ |
Dave Volonte | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
Marty Martin | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Jason | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Greg Muscurella | PERSON | 0.99+ |
Erik | PERSON | 0.99+ |
Melissa | PERSON | 0.99+ |
Micheal | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Justin Warren | PERSON | 0.99+ |
Michael Nicosia | PERSON | 0.99+ |
Jason Stowe | PERSON | 0.99+ |
Sonia Tagare | PERSON | 0.99+ |
Aysegul | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Prakash | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Bruce Linsey | PERSON | 0.99+ |
Denice Denton | PERSON | 0.99+ |
Aysegul Gunduz | PERSON | 0.99+ |
Roy | PERSON | 0.99+ |
April 2018 | DATE | 0.99+ |
August of 2018 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
April of 2010 | DATE | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Japan | LOCATION | 0.99+ |
Devin Dillon | PERSON | 0.99+ |
National Science Foundation | ORGANIZATION | 0.99+ |
Manhattan | LOCATION | 0.99+ |
Scott | PERSON | 0.99+ |
Greg | PERSON | 0.99+ |
Alan Clark | PERSON | 0.99+ |
Paul Galen | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Jamcracker | ORGANIZATION | 0.99+ |
Tarek Madkour | PERSON | 0.99+ |
Alan | PERSON | 0.99+ |
Anita | PERSON | 0.99+ |
1974 | DATE | 0.99+ |
John Ferrier | PERSON | 0.99+ |
12 | QUANTITY | 0.99+ |
ViaWest | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
2015 | DATE | 0.99+ |
James Hamilton | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
$10 million | QUANTITY | 0.99+ |
December | DATE | 0.99+ |
Greg Hughes, Veritas | Veritas Vision Solution Day NYC 2018
>> From Tavern on the Green in Central Park, New York, it's theCUBE, covering Veritas Vision Solution Day. Brought to you by Veritas. (robotic music) >> We're back in the heart of Central Park. We're here at Tavern on the Green, a beautiful location for the Veritas Vision Day. You're watching theCUBE, my name is Dave Vellante. We go out to the events, we extract the signal from the noise, and we've got the CEO of Veritas here, Greg Hughes, newly minted, nine months in. Greg, thanks for coming on theCUBE. >> It's great to be here Dave, thank you. >> So let's talk about your first nine months. What was your agenda? You know, they talk about the 100 day plan. What was your nine month plan? >> Yeah, well look, I've been here for nine months, but I'm a boomerang. So I was here from 2003 to 2010. I ran all of global services during that time and became the chief strategy officer after that. I was here during the merger with Symantec, and then ran the Enterprise Product Group. So I had all the products and all the engineering teams for all the Enterprise products. And really my starting point is the customer. I really like to hear directly from the customer. So I've spent probably 50% of my time out and about, meeting with customers. And at this point, I've met with 100 different accounts all around the world. And what I'm hearing makes me even more excited to be here. Digital transformation is real. These customers are investing a lot in digitizing their companies. And that's driving an explosion of data. That data all needs to be available and recoverable, and that's where we step in. We're the best at that. >> Okay, so that was sort of alluring to you. You're right, everybody's trying to get digital transformation right. It changes the whole data protection equation. It kind of reminds me, on a much bigger scale, of virtualization. You remember, everybody had to rethink their backup strategies because you now had less physical resources.
This is a whole different set of pressures, isn't it? It's like you can't go down, you have to always have access to data. Data is-- >> 24 by seven. >> Increasingly valuable. >> Yup. >> So talk a little bit more about the importance of data, the role of data, and where Veritas fits in. >> Well, our customers are using new, they're driving new applications throughout the enterprise. So machine learning, AI, big data, internet of things. And that's all driving the use of new data management technologies. Cassandra, Hadoop, Open Sequel, MongoDB. You've heard all of these, right? And then that's driving the use of new platforms. Hyper-converged, virtual machines, the cloud. So all this data is popping up in all these different areas. And without Veritas, it can exist, it'll just be in silos. And that becomes very hard to manage and protect it. All that data needs to be protected. We're there to protect everything. And that's really how we think about it. >> The big message we heard today was you got a lot of different clouds, you don't want to have a different data protection strategy for each cloud. So you've got to simplify that for people. Sounds easy, but from an R&D perspective, you've got a large install base, you've been around for a long, long time. So you've got to put investments to actually see that through. Talk about your R&D and investment strategy. >> Well, our investment strategy's very simple. We are the market share leader in data protection and software-defined storage. And that scale, gives us a tremendous advantage. We can use that scale to invest more aggressively than anybody else, in those areas. So we can cover all the workloads, we can cover wherever our customers are putting their data, and we can help them standardize on one provider of data protection, and that's us. So they don't have to have the complexity of point products in their infrastructure. 
>> So I wonder if we could talk, just a little veer here, and talk about the private equity play. You guys are the private equity exit. And you're seeing a lot of high profile PE companies. It used to be where companies would go to die, and now it's becoming a way for the PE guys to actually get step-ups, and make a lot of money by investing in companies, and building communities, investing in R&D. Some of the stuff we've covered. We've followed Syncsort, BMC, Infor, a really interesting company, what's kind of an exit from PE, right? Dell, the biggest one of all. Riverbed, and of course Veritas. So, there's like a new private equity playbook. It's something you know well from your Silver Lake days. Describe what that dynamic is like, and how it's changed. >> Oh look, private equity's been involved in software for 10 or 15 years. It's been a very important area of investment in private equity. I've worked for private equity firms, worked for software companies, so I know it very well. And the basic idea is, continue the investment. Continue in the investment in the core products and the core customers, to make sure that there is continued enhancement and innovation, of the core products. With that, there'll be continuity in customer relationships, and those customer relationships are very valuable. That's really the secret, if you will, of the private equity playbook. >> Well and public markets are very fickle. I mean, they want growth now. They don't care about profits. I see you've got a very nice cash flow, you and some of the brethren that I mentioned. So that could be very attractive, particularly when, you know, public markets they ebb and flow. The key is value for customers, and that's going to drive value for shareholders. >> That's absolutely right. >> So talk about the TAM. Part of a CEOs job, is to continually find new ways, you're a strategy guy, so TAM expansion is part of the role. How do you look at the market? Where are the growth opportunities? 
We see our TAM, or total addressable market, as being around $17 billion, cutting across all of our areas, probably growing in the high single digits, 8%. That's kind of a big picture view of it. When I think about it, I like to think about it from the themes I'm hearing from customers. What are our customers doing? They're trying to leverage the cloud. Most of our customers, which are large enterprises, and we work with the blue-chip enterprises on the planet, are going to move to a hybrid approach. They're going to have on-premise infrastructure and multiple cloud providers. So that's really what they're doing. The second thing our customers are worried about is ransomware, and ransomware attacks. Spear phishing works; the bad guys are going to get in. They're going to put some bad malware in your environment. The key is to be resilient and to be able to restore at scale. That's another area of significant investment. The third, they're trying to automate. They're trying to make investments in automation, to take out manual labor, to reduce error rates. In this whole world, tape should go away. So one of the things our customers are doing is trying to get rid of tape backup in their environment, where tape is a long-term retention strategy. And then finally, if you get rid of tape, and you have all your secondary data on disk or in the cloud, what becomes really cool is you can analyze all that data, out of band from the primary storage. That's one of the bigger changes I've seen since I've returned to Veritas.
When you think about cloud and digital transformation, automation is fundamental. We had NBCUniversal on earlier, and the customer was talking about scripts: how scripts are fragile, need to be maintained, and don't scale. So he wants to drive automation into his processes as much as possible, using a platform, sort of API-based, modern, microservices, containers, kind of using all those terms. What does that mean for you guys in terms of your R&D roadmap, in terms of the investments that you're making in those types of software innovations? >> Well actually, one of the things we're talking about today is our latest release, NetBackup 8.1.2, which had a significant investment in APIs, allowing our customers to use the product and automate processes, tying it together with their infrastructure, like ServiceNow, or whatever they have. And we're going to continue full throttle on APIs. I was just having lunch with some customers today, and they want us to go even further with our APIs. So that's really core to what we're doing. >> So you guys are a little bit like the New England Patriots. You're the leader, and everybody wants to take you down. So you always start-- >> Nobody's confused me for Tom Brady. Although my wife looks... I'll stack her up against Giselle anytime, but I'm no Tom Brady. >> So okay, how do you maintain your leadership and your relevance for customers? A lot of VC money coming into the marketplace. Like I said, everybody wants to take the leader down. How do you maintain your leadership? >> We've been around for 25 years. We're very honored that 95% of the Fortune 100 are our customers. If you go to any large country in the world, it's very much like that. We work with the bluest of blue-chips, the biggest companies, the most complex, the most demanding (chuckling), the most highly regulated. Those are our customers. We steer the ship based on their input, and that's why we're relevant. We're listening to them.
Our customers are extremely relevant. We're going to help them protect, classify, and archive their data, wherever it is. >> So the first nine months were all about hearing from customers. What's the next 12 to 18 months about for you? >> We're continuing to invest, and I'm delighted to talk about partnerships and where those are going as well. I think that's going to be a major emphasis for us, to continue to drive our partnerships. We can't do this alone. Our customers use products from a variety of other players. Today we had Henry Axelrod, from Amazon Web Services, here talking about how we're working closely with Amazon. We announced a really cool partnership with Pure Storage. Our customers that use Pure Storage's all-flash arrays know their data's backed up and protected with Veritas and with NetBackup. It's about continually making sure that, across this ecosystem of partners, we are the one player that can help our large customers. >> Great, thank you for mentioning that ecosystem is a key part of it. The channel, that's how you continue to grow. You get a lot of leverage out of that. Well Greg, thanks very much for coming on theCUBE. Congratulations on your-- >> Dave, thank you. >> On the new role. We are super excited for you guys, and we'll be watching. >> I enjoyed it, thank you. >> All right. Keep it right there everybody, we'll be back with our next guest. This is Dave Vellante, we're here in Central Park. Be right back, Veritas Vision, be right back. (robotic music)
Rod Lappin, Lenovo & Najaf Husain, Cloudistics, Inc. | Lenovo Transform 2018
(upbeat music) >> Live, from New York City, it's theCUBE! Covering Lenovo Transform 2.0, brought to you by Lenovo! >> Welcome back to theCUBE's live coverage of Lenovo Transform. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We've got two guests on the show right now, we've got Naj Husain, the CEO of Cloudistics, and we're welcoming back Rod Lappin, who is the Senior Vice President of Sales and Marketing here at Lenovo. Thanks so much for coming on the show. >> Thank you very much. >> Great to be here. Nice to meet you. >> So... >> Rod, why are you so lazy at this show? MCing the show, on theCUBE twice... (laughter) >> I know, it's been an exciting day, hasn't it? I've actually done a few meetings since I saw you last time as well, so I'm living on 2 and a half hours of sleep last night, and I'm running hot, so I'm looking forward to a drink at the end of the night. >> Yeah, a well-deserved drink. >> Sleep fast, sleep fast. >> Exactly. So I want to start with you, Naj. >> Yeah. >> Tell our viewers a little bit about Cloudistics, it's based in western Virginia, what do you do? >> Yeah, so we build a private cloud with a premium experience. We founded the company in 2013 on the idea of simplifying infrastructure. In our previous world, we actually lived the problem. So in our previous company, we actually took Amazon as an analog, and tried to move our resources to Amazon and simplify it. Because we were tired of managing complex infrastructure. So as a company of 500 people and a software development company, we wanted to simplify our world. So we went to Amazon, we spent 3 months or so developing code and implementing QA. Great, real simple. Don't have to worry about hardware at all. It's a great value proposition. All of a sudden we started to implement this thing; after month one it was 100,000 bucks. After month two, it was 150,000 a month. After month three, we were at 200,000 a month in fees to run Amazon, right.
While the value proposition's all about "simplicity is awesome," the problem is, it's very, very expensive. So as a company of 500, like I said, we had to figure out what to do next. So I spoke to my CTO and I said, how can we solve this problem? So he said, okay, I can bring the infrastructure back on prem, great. So he priced that out; it cost less than one month of opex in Amazon. So we did, and then we had a problem where, okay, now we need software to run on it, to make it work. We needed a virtualization platform. So we looked at what was out there, and the cost for it at the time, in 2011, 2012, was a million dollars in virtualization software. I said we can't do this, right? We're too small, we don't have the funds to do that. So we decided at that point we're going to found a company to solve that problem, and democratize IT to give companies of any size the ability to implement cloud computing behind the firewall, at an affordable price. >> And you call it "Composable Cloud". >> We do. So when we looked at the market at that point, there were different types of technologies out there, and there were things called hyper-converged and traditional converged infrastructure. And what we did is we took a page out of how the public cloud operates. And the way the public cloud operates is they have composable resources, so you can scale resources independently. So I can scale networking separately from compute, separately from storage. And that's a big deal when you're running a cloud, because you have to worry about economics. So when we architected the product, we started there. So we started with this scalable architecture that's composable, so companies grow as they need to grow. They don't have to tie resources together, right, so there's no resource drift, we call it. It's independent scaling. And that's one of the big differentiators in our platform. >> So Rod, why don't you help bring in some of your customer views that you hear on this.
I'm sorry but I smirk a little bit when I hear, "We're going to simplify things." (laughter) In my career, I've talked to lots of companies, and everybody always has the goal to be simple. >> Yes. >> "But, oh wait, I need to change this a little bit, I need this other thing, oh wait, I've got this Lenovo product, but oh, you've got this other product that's good, how do I manage all of these?" And cloud was supposed to be, you know, just an easy button and low-cost and everything, and it's helped, but it's also added new silos, and new things that I now have to get my arms around. So maybe set up for why you-- >> Yeah, sure. Well I think to Naj's point firstly, I think the Cloudistics solution is really unique. And it's very compelling, actually. It's a very compelling offering. Firstly because, you know, it's one management plane: you could basically run storage, compute, as well as networking, sitting over the top of a hypervisor, on prem, to his point. So to Naj's point, you have like 50% of the cost of a normal cloud infrastructure that would be going out into the market, and still have the management suite sitting up in the cloud that they obviously manage for you. That's very cool. But one of the other things that's very cool about the Cloudistics offering is you can scale up and scale out, depending on customers' requirements. So once you've got yourself in this composable cloud model, right? And you're actually running with Cloudistics, instead of saying okay, my business is growing, now it's getting bigger, I have to pay this much for an extra amount of x, whatever it might be, if you want more compute, you can have more compute. If you want more storage, you can have more storage. You can actually add the components of the cloud that you require, based on the consumption that your business is actually running to. And that's one of the very, very compelling aspects that Cloudistics' offering actually has. >> Composable and customizable. >> Yeah, and very simple.
One of the key tenets of the platform is making this thing really, really simple. So when we designed the product, we started with the application first because at the end of the day, that's what you're trying to run. You're not here to manage infrastructure, you're here to be agile in your business. So we focused everything on making it really simple to deploy, and making the hardware invisible, automating all of the updates, so you never have to see hardware. And all you can focus on is delivering your services. >> So I want you to get really specific for a second. >> Yeah. >> Because many of the things that I hear, they think, oh, reminds me of what the companies that do hyper-converged say that they're doing. >> Right. >> Simplicity in the enterprise... >> Right. >> Easy to manage, things like that. >> Yes. >> Is there a software offering, is there hardware involved-- >> Correct. >> How does this all go together, is this a management suite that ties in to what I have? >> That's a great question. >> Make sure I understand. >> Yeah, so it's a completely integrated hardware-software platform, so think of it like your iPhone. When you buy an iPhone, it's hardware-software beautifully integrated... >> Motorola's the same by the way. (laughter) Yeah, okay, Motorola, fine. But it's a phone that's integrated with hardware-software. You connect to the network, you're up and running, you download your apps, and you're done. It's a beautiful experience. So we took that as an analog for our platform. So literally, it's a completely plug-and-play cloud: you roll it in, plug it into your network, go to our Marketplace, log in, download apps and start running. You can run containers, you can run Docker, you can run Windows SQL, all those apps are available for you to run with a click. So businesses now can be much more agile, right? Because now they're worried about delivering services, not messing with silos of hardware.
Right, so now generalists can manage this platform. DevOps can manage this platform. Just like the public cloud. Yep. >> So to make this setup really simple, what we're doing is we're taking the ThinkAgile solution, which is that pre-configured, pre-set, rackable solution. So compute, storage and networking all in one solution. At the factory, we're setting it up with all the Cloudistics structure that we need to send it out, and basically ship it on site for customers. They only need two plugs, right? A plug for the network, and a plug for power, and basically it's ready to go. >> It's amazing. >> Rod, can you help, so we were just talking about the big news with NetApp. >> Right. >> You know, you've got new relationships with partners, how does this fit in the portfolio? What are the customer kind of pain points as to when Lenovo would lead with this? >> I think that's a fair question, Stu. I think if you have a look at what our go-to-market strategy is in the hyper-converged space, this is largely guided by customer demand. So basically, to that customer-demand point, we'll go in and we'll obviously lead with our customers and understand what are the pain points they actually have in their environment. Because many customers have got different environments, and three years ago, everyone was like "I'm going to be an AWS shop, or I'm going to be..." The reality is everyone's got so many different clouds in their environment, they've got so many different environments set up. You know, whether that's the Adobe Marketing Cloud, or AWS, whatever it might be, you've got to manage all of these different environments. So it sort of is dependent purely on what the customer's environment is, where we actually go. Now, from our perspective, this is a brand new relationship, only 6 months old, we are setting up dedicated people specifically to sell this with Cloudistics, and I feel like it's got a really good future.
We just got to get this business growing, and I think we're going to be talking to more customers about it. >> Yep. >> So who is your sweet spot? You said that the emphasis of starting this company was that companies of any size would be able to do these things, and act more agile, as you said. >> Right. >> So who is your sweet spot, what's your target? >> Yeah, so we target the medium-sized enterprise. So you know, 500 employees to 5,000, kind of in that range is our initial target. And we drive applications like Windows SQL applications, applications that rely on performance potentially, or even general purpose workloads where they just want to simplify management of the stack. And as Rod was saying, the management of the platform's pretty unique, in the fact that it's in the cloud; the management of the platform is in the cloud. So it makes it very simple to manage. So from one central spot, I can manage my multiple stacks throughout my company, and it makes it very easy to deploy applications and manage everything. >> Do you have any specific examples of sort of the pain points that you helped solve? >> Yeah, so in our case, it was really around driving simplicity. So in many companies, many medium-sized companies, they struggle with the complexity of multi-tiered infrastructure. So I have to have a virtualization expert, I have to have a storage expert, I have to have a network expert. And I have to have an app expert as well. Right, I've got to make all those people work together. So businesses now are trying to be more agile to push applications out the door so they can run their business. So all those interdependencies create a lot of complexity. So we've cut out all of that and we've created a platform where you don't need all of those interdependencies. It's done for you. So it's literally plug and play, so businesses can get right to work deploying applications.
>> So, there are a number of things that we've looked at, from a research standpoint, of what makes a private cloud, and a lot of it is kind of measuring the bar against a public cloud. You said simplicity, absolutely a good one. One of the ones where we're starting to see some movement in the private cloud, it's starting to go more opex. >> Right. >> As-a-service offerings. >> Correct. >> I was walking through the show before and talking to Lenovo people about that. Is that part of the discussion today, and maybe talk about how that works. >> It is, and the platform is fully tenanted, for example. We took a page out of the public cloud where, if you go into any public cloud, you create yourself a virtual data center. And within that virtual data center, you can deploy your applications. With our platform you can do the same. You can have a pool of resources; we've abstracted everything into pools. RAM, cores, and storage, that's all you need. You can allocate those to your constituents, your customers, your departments. And they have a completely multi-tenanted, fully secure environment to work under. Without impacting anybody else. And with our core technology around networking, we've completely isolated the layer 3 networking layer, to make sure it's highly secure within that box. >> I understand. So they can almost be like a service provider themselves? >> Yes. >> So I guess one of the things is, what about from the financial standpoint? Are things still allowing me to scale up and scale down, is it just in that box I can carve it up? You know Lenovo has an option that was like, oh hey, I need to burst up for a certain season, but I'm not going to have to pay, or are there certain things they can do financially? >> Very, very interesting. So the platform is elastic in a sense, where you can plug in and play resources. You can add memory, you can add cores, you can add storage, you can add networking on demand. And jack it in and scale the resources.
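The pooled, multi-tenant allocation described above, where RAM, cores, and storage are abstracted into pools and sliced out to departments or customers without impacting one another, can be illustrated with a minimal sketch. All class and field names here are invented for illustration; the real platform's model is certainly richer:

```python
class ResourcePool:
    """Toy model of pooled, composable allocation: resources are
    abstracted into one pool, slices are handed to tenants, and each
    resource can be grown independently of the others."""

    def __init__(self, ram_gb, cores, storage_tb):
        self.free = {"ram_gb": ram_gb, "cores": cores, "storage_tb": storage_tb}
        self.tenants = {}

    def allocate(self, tenant, **ask):
        # refuse any grant that would oversubscribe the pool
        for res, amount in ask.items():
            if self.free.get(res, 0) < amount:
                raise ValueError(f"pool exhausted: {res}")
        for res, amount in ask.items():
            self.free[res] -= amount
        self.tenants.setdefault(tenant, {}).update(ask)

    def add_capacity(self, **extra):
        # composable scaling: grow one resource without touching the rest
        for res, amount in extra.items():
            self.free[res] = self.free.get(res, 0) + amount
```

So a finance department might get its own 64 GB / 16-core slice while the pool later grows by plugging in storage alone, which is the "no resource drift" independence described earlier in the interview.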
We are working on coming out, in a future release, with a hybrid capability where you can burst and scale into public clouds, which is a big deal, right? Because we have very unique layer 3 networking technology, we can potentially stretch those networks into some other cloud, which is very interesting. So that means that our Lenovo customers can then burst, on demand, on a monthly pay model, into a public cloud if necessary. So that's a future thing we're working on. >> To your point on as a service, you heard today, obviously, as we had a little bit of a keynote up there, Kirk hinted at the fact that we're trying to drive an as-a-service solution around the hardware, which really matches perfectly with the Cloudistics solution that Naj was just talking about. >> Yes. >> We're really, really close to this. I would have loved this to have been one of our announcements today. But we've got a few other things going on. So we will come forward in the market with a fully metered as-a-service solution that we think is very compelling in market to match up with the Cloudistics offering very, very shortly actually. >> It's fun. >> So how are you getting the word out? I mean we already know you need to increase your budget, that was our last guest who said that. (laughter) >> Exactly, so Naj and I had a focus call about this decision this week. >> Yes. >> We need to get the word out a lot more aggressively, and a lot more compellingly than we are today. So we have dedicated resources now in Western Europe and North America, we're about to expand our dedicated resources into China and the Asia Pacific, and then down into Latin America. So we start off by dedicating people on the street that are actually going to be out there at the start, talking to customers. Then we're going to have to drive into a marketing campaign of some description, so we can actually start to drive a more compelling story to market, so they actually get to know what Naj's company has developed.
Because once again, it's really compelling. >> Right, great. Well Naj, Rod, thanks so much for coming on the show, it was great having you. >> Thank you. >> Thanks very much. >> I'm Rebecca Knight with Stu Miniman, we will have more from Lenovo Transform and theCUBE's live coverage in just a little bit. (upbeat music)
theCUBE Insights from VMworld 2018
(upbeat techno music) >> Live from Las Vegas, it's theCUBE covering VMworld 2018, brought to you by VMware and its ecosystem partners. >> Welcome back to theCUBE, I'm Lisa Martin with Dave Vellante, John Furrier, and Stu Miniman at the end of day two of our continuing coverage, guys, of VMworld 2018, a huge event, 25,000+ people here, 100,000+ expected to be engaging with the on-demand and live experiences. Our biggest show, right? 94 interviews over the next three days, two of them down. Let's go, John, to you, some of the takeaways from today from the guests we've had on both sets, what are some of the things that stick out in your mind? Really interesting? >> Well we had Michael Dell on so that's always a great interview, he comes on every year and he's very candid, and this year he added a little bit more color commentary. That was great, it was one of my highlights. I thought the keynote that Sanjay Poonen did, he had an amazing guest, a Nobel Peace Prize winner, the youngest ever, and her story was so inspirational, and I think that sets a tone for VMware, putting a cultural stake in the ground around tech for good. We've done a lot of AI for good with Intel and there's always been these initiatives, but I think there's now a cultural validation that people generally want to work for and buy from companies that are mission driven, and mission driven is now part of it and people can be judged on that front, so it's good to see VMware get some leadership there and put the stake in the ground. I thought that was the big news today, at least from my standpoint. The rest were like point product announcements. Sanjay Poonen went into great detail on that.
Pat Gelsinger also came on, another great highlight, and again we didn't have a lot of time, he was running a bit late, he had a tight schedule, but it shows how smart he is, he's really super technical and he actually understands at a root level what's going on, so he's actually a great CEO right now, the financial performance is there and he's also very technical. And I think it encapsulates all of it that Dell Technologies, under Michael Dell, he's making so much more money, he's going to be richer and richer. (laughing) He took an entrepreneurial bet, it wasn't hurting at the time, but Dell was kind of boring, Dave. I wouldn't call it like an innovative company at the time when they were public, running on the 90-day shot clock. They had some things going on, but they were a hardware company, a supplier to IT footprints-- >> Whoa, whoa, they were 60 billion dollars in revenue and a 20 billion dollar market cap, so something was broken. >> Well I mean it was working numbers-wise but he seemed-- >> No, that's the opposite: a 20 billion dollar value on 60 billion of revenue is, you're sort of a failure, so anyway, at the time. >> Market conditions aside, right, at the time, he seemed like he wanted to do something entrepreneurial, and the takeaway from my interview with him, our interview with him, was he took an entrepreneurial bet, put his own cash on the table, and it's paying off, that horse is coming in. He's going to make more money on this transaction, and it takes EMC out of the game, folds it into the operations; it really is going to be, I think, a financial success story if market conditions continue to be the way they are. This will go down as a great financial maneuver by Michael Dell, and he'll be in the top epsilon of deals. The story people might forget is that Carl Icahn tried to take the company away from him. Michael Dell beat the great Carl Icahn, which doesn't happen often. Why did Carl Icahn want to take Dell private?
Because he knew he could make a boatload of money off of it, and Michael Dell said, "No way you're taking my company. I'm going to do my thing and change the industry." >> He's going to have 90% voting control with Silver Lake Partners when the deal is all said and done, and taking a company private, and executing the financial engineering plus the operational execution, is really hard to do; look at Elon Musk in the news today. He's trying to take Tesla private, he got his butt handed to him. Now he's saying, "No, we're going to stay public." (laughing) >> Wait, guys, are you saying Michael, after he gets all this money from VMware, that it will help them go public, he's not going to sell off VMware or get rid of that, right? >> Well that's a joke that he would sell VMware, I mean-- >> Unless the cash is going to be good? >> No, he won't do it. >> I don't think it'll happen. I mean, maybe some day he sells some portion of it, but you're not going to give up control of it, why would he? It's throwing off so much cash. He's got Silver Lake as a private equity company, they understand this inside and out. I mean this transaction goes down in history as one of the greatest trades ever. >> Yeah. >> Let me ask you guys a question, because I think it's one we brought up in the interview: at that time, the pundits, we were actually right on this deal. We were very bullish on it, and we actually analyzed it. You guys did a good job at Wikibon and we on theCUBE pretty much laid out what happened. He executed it, we put the risks out there, but at the time people were saying, "This is a bad deal, EMC." The current state of IT at that time looked like it was dismal, but the market forces that changed were cloud, and so what were those sideways impact points that no one understood, that really helped him lift this up? What's your thoughts, Dave, on that?
>> First of all the desktop business did way better than anybody thought it would, which is amazing, and actually EMC did pretty poorly for a while, and so that was kind of a head fake. And then as we knew, VMware crushed it and crushed it even more than anybody expected, so that threw off so much cash they were able to de-lever; they did Pivotal, they did a Pivotal IPO, sold some software assets. I mean basically Michael Dell and his team did everything they said they were going to do, and it's worked out, as he said today, even better than they possibly thought. >> Well and the commentary I'd give here is when the acquisition of EMC by Dell happened, the big turn we had is the impact of cloud, and we said, "Well, okay, they've got VMware over there and they've got Pivotal, but Dell's just going to be a boring infrastructure company with server, network and storage." The message that we heard at Dell World, and maturing even more here, is that this is a portfolio of families. Yes, VMware's a big piece of it, NSX and the networking, but Pivotal with PKS, all of those tie in to what Dell's selling. Every time they're selling VxRail, you know that has a big VMware piece. They do the networking piece that extends across multi-clouds, so Dell has a much better multi-cloud story than I expected them to have when they bought EMC. >> But now, VMware hides a lot of warts. >> Yeah. >> Right? >> Absolutely. >> Let's be honest about that. >> What are they? >> Okay. I still think the client business is exposed. I mean as great as it is, you've got to gain share in that business if you want to keep winning, number one. Number two is, the big question I have is can the core of Dell EMC continue to innovate, or will it just make incremental improvements, have to do acquisitions for innovation, inorganic acquisitions, and end up with more stovepipes? That's always been, Stu used to work there, that was always EMC's biggest challenge.
Jeff Clarke came in and said, "Okay, we're going to rationalize the portfolio." That has backlash as customers say, "Well wait a minute, does that mean you're not going to support my products?" No, no, we're going to support your products. So they've got to continue to innovate. As I say, VMware, because of how much cash it throws off, it's 50% of the company's profits, hides a lot of those exposures. >> And if VMware takes a turn, if market conditions change, the looming debt is exposed, so again, the game's not over for Dell. He can see the finish line, but. (laughing) >> Buy low, sell high, guess who's selling right now? >> So a lot of financial impact, continued innovation, but at the end of the day, guys, this is all about impacting customers' businesses. Not just that we've got to enable them to be successful in this multi-cloud era, that's the norm today. They need to facilitate successful digital transformations, business outcomes, but they also have, with VMware, Dell EMC, Dell Technologies, great power to help customers transform their cultures. I'd love to get perspective from you guys, because I love the voice of the customer; what are some of your favorite Dell EMC, VMware, partner, customer stories that you've heard the last couple days that really articulate the value this financially successful company is delivering?
So to me VMware has to start pumping out really strong products and technologies that the customers are going to buy, right? (laughing) >> In conjunction with the customer, to help co-develop what the customers need. >> So I was talking to a customer and he said, "Look, I'm 10 years behind where the cloud guys are with Amazon, so all I want is VMware to make my life easier, continue to cut my costs. I like the way I'm operating, I just get constant pressure to cut cost, so if they keep doing that, I'm going to stay with them for a long, long time." Pete Townsend said it best, companies like VMware, Dell EMC, they move at the speed of the CIO, and as long as they can move at the speed of the CIO, I've said this a million times, the rich get richer, and it's why competent management led by founders like Larry Ellison, like Michael Dell, continues to do well in this industry. >> And Andy Jassy, technically, I would say, a founder of AWS, because he started it. >> Absolutely. >> A key, the other thing I would also say from a customer, we hear a lot of customers, I won't name names because a lot of our data's in hallway conversations and at night when we go out and get the real stories. On theCUBE it's mostly, oh we've been very successful with VMs, we use virtualization, blah, blah, blah, and it's an IT story, but the customers in the hallways that are off the record are saying essentially this, I'm paraphrasing: look, we have an operation to run. I love this cloud stuff and I'd love to just blink my fingers and be in the cloud and just get rid of all this and operate at a level of cloud native, I just can't. I can't get there. They see Amazon's relationship with VMware as a bridge to the future, and it takes away a lot of cognitive dissonance around the feelings around VMware's lack of cloud, if you will.
In this case, now that's satisfied with the AWS deal, and they're focused on operations on premises and how to get their apps more cloud-like, modernized. So a lot of the blocking and tackling of the customer is: I've got virtualization and that's great, but I don't want to miss out on the next level of innovation. Okay, I'm looking at it, going slow, but no one's instantly migrating to the cloud. >> No way, no way. >> They're either born in the cloud, or you're on migration schedules now, really evaluating the financial impact, economic impact, headcount impact of cloud. That's the reality of the cloud. >> You've got to throw a flag on some of that messaging of how easy it is to migrate. I mean it's just not that easy. I've talked to customers that said, "Well, we started it and we just kind of gave up. There was no point in it. The new stuff we're going to do in the cloud, but we're not going to migrate all of our apps to the cloud, it just makes no sense, there's no business case for it." >> This is where the NSX and containers and Kubernetes bet is big, I think. I think if NSX can connect the clouds with some sort of interoperable layer for whatever workloads are going to move on either Amazon or the clouds, that's good. If they want to get the developers off virtualization, into a new drug, if you will, it's going to be services, microservices, Kubernetes, because you can throw containers around those old workloads, modernize with the new stuff without killing the old, and Stu and I heard this clear at the CNCF and the Linux Foundation, that this has changed the mindset, because you don't have to kill the old to bring in the new. You can bring in the new, containerize the old, and manage at the speed of the CIO. >> And that's Amazon's bet, isn't it? I mean, look, even Sanjay said, if you go back five, six years, the original re:Invent, that was sweep the floor, bring it all into the cloud. I think that's in Amazon's DNA. I mean ultimately that's their vision.
That's what they want to have happen, and the way they get there is how you just described it, John. >> That's where this partnership between Amazon and VMware is so important, because, right, Amazon has a lot of the developers but needs to be able to get deeper into the enterprise, and VMware is starting to make some progress with the developers. They've got a code initiative, they've got all of these cool projects that they announced, with everything from serverless and Kubernetes and many others, Edge is going to be a key use case there, but you know, VMware is not, this is not the developer show. Most of the conversations that I had with customers, we're talking IT things, I mean customers are doing some cool things, but it's about simplifying my environment, it's about helping operations. Most of the conversations are not about this cool new microservices, building these things out. >> Cisco really is the only legacy, traditional enterprise company that's crushing developers. You give IBM some chops, too, but I wouldn't say they're crushing it. We saw that at Cisco Live, Cisco is doing a phenomenal job with developers. >> Well, the thing about the cloud, one thing I've been pointing out, an observation that I have, is if you look at the future of the cloud, and you can look for metaphors and/or real examples, I think Amazon Web Services, obviously we know them well, but Google Cloud to me is a picture of the future. Not in the sense of what they have for the customers today, it's the way they've run their business from day one. They have developers and they have SREs, Site Reliability Engineers. This VMworld community is going down two paths. Developers are going to be rapidly iterating on real apps, and operators are going to be running systems. That's network, storage, all integrated. That's like an SRE at Google.
Google's running massive scale and they've perfected it, hence Kubernetes, hence some of the tools coming into services like Istio and things that we're seeing in the Linux Foundation. To me that's the future model, it's an operator and a set of developers. Whoever can make that easy, completely seamless, is the winner of it all. >> And the linchpin, a linchpin, maybe not the linchpin, but a linchpin is still the database, right? We've seen that with Oracle. Why is Amazon going so hard after the database? I mean it's blatantly obvious what their strategy is. >> Database is the hill that everyone is trying to take down. Capture the hill, you get the high ground with the database. >> Come on, Dave, when you used to do the financial models of how much money is spent by the enterprise, that database was a big chunk. We've seen the erosion of lots of licensing out there. When I talk to Microsoft, they're like, pushing a lot of open source, they're going to cloud. Microsoft licensing isn't as much. VMware licensing is something that customers would like to shrink over time, but database is even bigger. >> It's a strategic fulcrum, obviously Oracle has it. Microsoft clearly has it with SQL Server. IBM, a big part of IBM's success to this day, is DB2 running on mainframe. (laughing) So Amazon wants a piece of that action, they understand to be a major player in this business you have to have database infrastructure. >> I mean costs are going down, it's going to come down to economics. At the end of the day, the operating models, as I said, some things about DB2 on mainframe, the bottom line's going to come down to the cost of running the tech versus the value it delivers. That's going to be the ultimate way that things are either going to be cleared out or replaced or expanded, so the bottom line is it's going to be a cost equation at that level, and then the upside's going to be revenue.
>> And just a great thing for VMware, since they don't own the application: when they do things like RDS in their environment, they are freeing up dollars that customers are then going to be more likely to want to spend with VMware. >> Great point. I want to make real quick, three things we've been watching this week. One, is the Amazon VMware deal a one-way trip to the cloud? I think it's clear it's not, in the near term anyway. And the second is, what about the edge? The edge to me is all about data, it's like the wild, wild west. It's very unclear that there's a winner there, but there's a new type of cloud emerging. And three is the Dell structure. We asked Pat, we asked VMware's Ray O'Farrell, we asked Michael, if that 11 billion dollar special dividend was going to impact VMware's ability to fund its future. Consistent answer there: no. You know, we'll see, we'll see. >> I mean what are they going to say? Yeah, that really limits my ability to buy companies, on theCUBE? No, that's the messaging, so of course, 11 billion dollars gone means they can't do M&A with the cash, that means, yeah, it's going to be R and D, what does that mean? Investment, so I think the answer is yes, it does limit them a little bit. >> Has to. >> It's cash going out the door. >> But VMware just spent, it is rumored, around 500 million dollars for CloudHealth Technologies, Dave, a Boston-based company with about 200 people. You know, hey, have a billion-- >> They're going to pay out a dividend anyway and do stock buybacks, but I'm not sure 11 out of the 13 billion is what they would choose to do that for, so going forward, we'll see how it all plays out, obviously. I think, Floyer wrote about this, more has to go toward VMware, less toward-- >> I think it's the other way around. >> Well, I think it's really good that we have one more day tomorrow.
>> I think it's a one way trip to the cloud in a lot of instances, I think a lot of VMware customers are going to go off virtualization, not hypervisor and end up being in the cloud most of the business. It's going to be interesting, I think the size of customers that Amazon has now, versus VMware is what? Does VMware have more customers than Amazon right now? >> It's pretty close, right? VMware's 500,000? >> 500,000 for VMware. >> And Amazon's-- >> Over a million. >> Are they over a million, really? >> Yeah. >> A lot of smaller customers, but still. >> Yeah. >> Customer's a customer. >> But VMware might have bigger customers, see that's-- >> No question the ASP is higher, but-- >> It's not conflict, I'm just thinking like cloud is natural, right? Why wouldn't you want to use the cloud, right? I mean. >> So guys-- >> So the debate continues. >> Exactly. Good news is we have more time tomorrow to talk more about all this innovation as well as see more real world examples of how VMware is going to be enabling tech for good. Guys, thanks so much for your commentary and letting me be a part of the wrap. >> Thank you. >> Thanks, Lisa. >> Looking forward to day three tomorrow. For Dave, Stu and John, I'm Lisa Martin. You've been watching our coverage of day two VMworld 2018. We look forward to you joining us tomorrow, for day three. (upbeat techno music)
Keith Moran, Nutanix | VMworld 2018
>> Live from Las Vegas, it's theCUBE covering VMworld 2018. Brought to you by VMware and its ecosystem partners. >> Welcome back to theCUBE's coverage of VMworld 2018. Two sets, wall-to-wall coverage. We had Michael Dell on this morning. We had Pat Gelsinger on this afternoon. And happy to welcome to the program, first-time guest, Keith Moran, who's the vice president with Nutanix. Keith, I've talked to you lots about theCUBE, you've watched theCUBE, first time on theCUBE. Thanks so much for joining us. >> Yeah, thanks for having me. It's a great show. >> Alright, so let's set the stage here. We're here in Vegas. It's my ninth year doing VMworld. How many of these have you done? >> So this is my fourth. >> Yeah? How's the energy of the show? The expo hall's hopping. You guys have a nice booth. What are you hearing from the customers here? >> I think that we're seeing just a lot of discussion around where the market's going with hybrid cloud. I think that it's a massive opportunity. I think people are trying to connect the dots on where it's going in the next five years. The vibe's extremely strong right now. >> I've met you at some of the Nutanix shows in the past and seen you at some of these, but tell us a little bit about your role, how long you've been there, where you came from before. >> I run the Central US for Nutanix, and I spent a long time in the converged space, whether it was NetApp, at EMC, through a few start-ups, and then I've been at Nutanix for four years. It's been a great ride, seeing how the market's adopting hyperconverged. The core problem and vision that Dheeraj saw nine years ago is playing out. He's five chess moves ahead of everyone. I think there's, again, a massive opportunity as we move forward. >> Keith, I'd love for you to share. I love people in the field. You're talking to customers every day. You hear their mindset.
I think back over the last 15 years in my career, when blade servers first came out, or when we started building converged solutions. It was like, "Oh, wait." Getting the organization together, sorting out the budgets. There were so many hurdles, because this was the way we did things, and this is the way we're organized, and this is the way the budgets go. I think we've worked through a number of those, but I'd love to hear from you where we are with most customers, how many of them are on board, and doing more things, modernizing, and making changes, and being more flexible. >> Yeah, so I think you're spot on in the sense that the silos were the enemy, in the sense that people were doing business as usual, and there was process, and they didn't want to take risks. But I think that the wave of disruption has been so strong, and we're in this period of mass extinction, where customers don't have a choice anymore. They have to protect against the competitive threat or exploit the opportunity, and I think that the speed and the agility of hyperconverged, and what the market disruption is doing, is forcing them to make those changes and forcing them to innovate. At the end of the day, that's their core revenue stream: how they experiment, how they innovate. Again, you're seeing the disruptions coming so fast that people are changing to survive. >> Yeah, we have some interesting paradoxes in the industry. We're talking about things like hyperconverged, yet really what we're trying to do is build distributed architectures. >> Correct. >> We're talking about, "Oh, well I want simplicity, and I want to get rid of the silos," but now I've got a multicloud environment where I've got lots of different SaaS pieces, I've got multiple public clouds, I often have multiple vendors in my public cloud, and I've like recreated silos and certifications and expertise. How do customers deal with that?
How do you and your team help to educate customers and get them up to speed so that hopefully the new modern era is a little bit better than what they were dealing with? >> Yeah, and I think that's part of where the opportunity is. I think that the private cloud people don't do public well, and I don't think that the public cloud vendors do private well. So that's why the opportunity's so big. And I think for us, we're going to continue to harden the IaaS stack of what we've built, and then our vision is: how do we build a control plane for the next generation? If you look at our acquisition strategy, and where we're investing, how do you have a single operating system that spans the user experience from the public to the private, making an exact replica? Again, I think customers are struggling with this problem, and as apps scale up and scale down with the demand for them, they want this ability to course correct and be able to move VMs and containers in a very seamless fashion from one app to the next and adjust for the business market conditions. >> Yeah, I had a comment actually from one of my guests this week. We now have pervasive multicloud. We spent a few years sorting out who the public clouds were going to be. And there are still moves and changes, but we know there's a handful of the real big guys, then there's the next tier of all of the service providers, and the software players, like Nutanix. Look, you're not trying to become a competitor to Amazon or Google. You're partners. I see Nutanix at those shows. So maybe explain the long-term strategy. You've been talking about enterprise cloud for a number of years, but what's that long-term vision as to how Nutanix plays in this ecosystem? >> Yeah.
So for us, I think part of it is our own cloud, which is Xi, and it's living in this multicloud world where our customer can do DR as a service with that single operating system, moving it from a Nutanix on-prem solution, moving it to a Nutanix cloud, moving it to Azure, moving it up to GCP, or moving it to AWS. And they have to do it with thought, because clearly there are so many interdependencies with these apps. There's governance, there's laws of the land, there's physics. There are so many things that are going to make this a complex equation for customers. But again, they're demanding it, and that's forcing the issue where customers have to make these decisions. >> Keith, I want to hear, when you talk to your customers, where are they with their cloud strategy? I heard at one conference, 85% of customers have a cloud strategy, and I kind of put tongue in cheek. I said, "Well, 15% of the people have got to figure something out, and the other 85, when you talk to them next quarter, the strategy probably has changed quite a bit." Because things are changing fast, and you need to be agile and be able to change and adjust with what's going on. So where do your customers, I'm sure it's a big spectrum, but? >> It is. The interesting thing for me for cloud is, on average, we're seeing that the utilization rate, specifically in AWS, is somewhere in the 25% range for reserved instances, which was very surprising to me, because the whole point of cloud is to test it, to deploy it, and to scale up, and if you're running in an environment at that utilization rate, the economics aren't working. So I think that people are starting to look at, alright, what are the economics behind the app? Does it make sense in the cloud? Does it make sense on-prem? Again, what are the interdependencies of it? The classic problems they're having are still around.
They're spending 80% of their time just managing firmware and drivers, and spending thousands of hours per quarter just troubleshooting and not impacting the business. So I think, fundamentally, that's what the customers are trying to solve: how do we get out of this business of spending all our time keeping the lights on, and how do we drive innovation? And that ratio has held historically for 20 years. And I think, again, Nutanix helps drive that in the sense that we're helping customers shift that ratio and that pain. I always say, "Put your smartest people on your hardest problems," and when you've got these high-end SAN administrators spending a lot of time, they should be working on automation, orchestration, repeatable process that gives scale and, again, impacts the business. >> Yeah. A line that I used at your most recent Nutanix show, talking to customers: step one was modernize the platform, and step two, they could modernize the application. >> Absolutely. >> Speak a little bit to that, because in this environment, we know the journey we went through to virtualize a lot of applications. I talked to a Nutanix customer this morning and talked about deploying Oracle, and I said, "Tell me how that was," because how many years did we spend fighting as customers? "You want to virtualize Oracle?" And Oracle would be like, "No, no, no. You have to use OVM. You have to use Oracle this. You have to use Oracle that." We've gone through that. And is it certified on Nutanix? It's good to go. It's ready to go. He's like, "It was pretty easy." And I'm like, it's so refreshing to see that. But when you talk about new modern applications, customers have this whole journey to embrace things like Agile and CI/CD and the like. Where does Nutanix play in this, and how are you helping? >> Yeah, so I think on the first part, when you look at the classic databases, things like SQL, we're automating so that you can abstract it in a very simple manner.
You look at the mode 2 apps like Kubernetes, we're taking a 37-page deployment guide and automating it down into three clicks, because customers want the speed, they want the deployment cycles, they want the automation associated with that. And it's having a big impact, in the sense that these customers are trying to figure out, "Where am I going here in the next three years?" For us, we're seeing massive workloads, whether it's Oracle, SQL, people deploying on it. And again, there's so much pressure for people to change and constantly disrupt themselves, and that's what we're seeing. And layer that all on top of a lot of legacy apps. So we've got oil and gas customers, and big retailers, and when they show us the dependency maps of their applications, it's incredible how complex these are, and they want simplicity and speed, and how do they get out of that business of the tangled mess. >> Yeah. Keith, I wonder if you have an example, and you might not be able to use an exact customer, but you mentioned some industries, so here's something I hear at a show like this. Alright, I understand my virtualized environment. I've deployed HCI. I really need to start extending and using public cloud. What are some first steps that you've seen customers take to make that successful? What are some of those important patterns, what works, and what are good places for them to start? >> I look at it almost, when I see some of the automation deployment cycles they have of how they get a VM through the full lifecycle, behind the scenes they have such massive complexities that it's hindering their ability to create automations. So the first layer is how do you simplify the infrastructure underneath, and it goes back to that dependency map. So again, oil and gas, the big retailers.
When they show us what their infrastructure is, they want to simplify that layer first, and then from there they can build incredible automation that gives them a multiple in the return that is much greater than what they're seeing in today's infrastructure. >> Keith, what's exciting you in the marketplace today? You get to meet with a lot of customers. Just kind of an open-ended. >> So for me, it's I've worked in a lot of big legacy companies, and I've never seen customers that have the passion towards Nutanix. And I think that it's the problems that we're solving for them, the impacts we're having on the business is driving that loyal following. But again, how fast people are either trying to exploit a competitive advantage or protect against a threat, that it's interesting to be right in this, in the epicenter of this big shift that's happening, right? Tectonic plates are shifting in that you've got a massive cloud provider like AWS. You've got a big player like VMware. What's the next generation going to look like? For me it's fascinating to see how these businesses are competing. I look at a customer. I've got a Fortune 500, The CTO's comment to me was, "I'm one app away from disruption." So they're a massive commercial real estate organization, and he's terrified of what could happen next, and he's got to stay way ahead of the curve, and I think that the innovation rate that we're bringing, the support, the infrastructure. I think it's a great place because of how we're serving what we call the underserved customer and having a big impact. >> Yeah. It's interesting. We always poke at the how much are customers just dreading that potential disruption and how much are they excited about what they can do different. You talk about working with traditional vendors in IT for the last decade or so, it's like IT and the business were kind of fighting over it. There's a line one of our hosts here, Alan Cohen, used to use. 
Actually, the first time I heard it was at the Nutanix show in Miami when we had him on. And he said there's this triangle, and where you want to get people is away from the no and the slow, and get them to go. Do you feel more people are fearful, or more people are excited? Is it a mix of-- >> It is. >> Those for your customers? >> And again, I think that the market forces are really helping, because people know they have to shift to stay competitive, and they're pushing every day. The level of change, and how people are embracing change, is much faster than it was. Because again, these disruption cycles are much faster, and they're coming at customers in a totally different way that they weren't prepared for. >> Alright, Keith, final word from you: how many of theCUBE interviews have you watched in the last bunch of years? >> The content, I mean, it's off the charts. Hundreds and hundreds of hours, I would say. >> Well, hey. Really appreciate you joining us. Keith Moran, not only a long-time watcher, but now a CUBE alumni with the thousands that we've done. So pleasure to talk with ya on-camera, as well as always off-camera. >> Yeah, great stuff, Stu. >> We'll be back with lots more coverage here from VMworld 2018. I'm Stu Miniman, and thanks for watching theCUBE. (upbeat music)
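The reserved-instance utilization point Keith raised earlier is simple arithmetic worth spelling out: a reserved instance bills for every hour whether or not it is doing work, so at 25% utilization the effective cost per useful hour is four times the sticker rate. A minimal sketch of that math (the hourly rate below is a made-up number for illustration, not an AWS price):

```python
# Hypothetical illustration of the utilization point above.
# The $0.10/hour rate is invented for the example, not an AWS list price.

def effective_hourly_cost(reserved_rate: float, utilization: float) -> float:
    """Cost per hour of *useful* work when a reserved instance sits idle
    the rest of the time. You pay the reserved rate for every hour."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return reserved_rate / utilization

rate = 0.10  # hypothetical $/hour for a reserved instance
print(effective_hourly_cost(rate, 1.00))  # 0.1 -> fully used: sticker price
print(effective_hourly_cost(rate, 0.25))  # 0.4 -> 25% used: 4x effective cost
```

At the 25% utilization Keith cites, the workload would have to be four times cheaper in the cloud than on-prem just to break even, which is why his customers are re-running the economics per app.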
Carol Carpenter & Navid Erfani-Ghadimi | Google Cloud Next 2018
>> Live from San Francisco, it's theCUBE. Covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Hey, welcome back everyone. We are live in San Francisco, CUBE coverage for Google Cloud Next 18. I'm John Furrier, here with Jeff Frick. Our next guests are Carol Carpenter, Vice President of product marketing here at Google Cloud, and Navid Erfani-Ghadimi, welcome to theCUBE. >> Thank you very much. >> Thanks for coming on. So data for good has been a topic we were just talking about here, day three. What do you guys do? And what's your relationship with Google? Because big data for good is really, with cloud computing, more relevant than ever before. Take a minute to explain your project. >> Sure, so in South Africa we are a social nonprofit organization. We try and connect young people that are not employed, never employed, to opportunities. And we are hosted in Google Cloud, and we use GCP as our sole provider. And what we try and do is we use data to be able to understand young people, understand the facets that make a young person employable, and match them to opportunities that we find. So we describe opportunities using different data points. All those data points that we have, we store them in Cloud SQL, and we store them in BigQuery. And then we run analytics and matching to be able to find how these young people can contribute to the economy. >> How's it going so far? >> So far it's been great. It's allowed us to think about 10X strategies. When we were an on-prem business, we were very limited by what we could provide, bricks and mortar, and now we're looking and saying, well, how do we provide as much capacity and capability to these young people using self-service channels? So it really has just opened up a world of possibilities. And we're really looking at it. And we're very excited, because we've taken on some initiatives in Rwanda as well.
And so we're taking on a global and Africa-wide kind of strategy, which I think without Cloud we really wouldn't be able to do. >> I wonder if you could just drill down, because what are some of the data points that you look at and you measure? And is it identifying the data points and then finding the match? Or is it finding the critical ones that you really need to address as a priority to get kids to that position where they can get a job? >> I mean it's really interesting because what we talk about, we talk about proxies for competence. So if you think about when you go apply for a job, you kind of say hello, here I am, and I've done this job for so many years, and that's your proxy for competence. So if you're a young person that just has a high school education and you're stepping in, we need to be able to describe you as a human, right? So for those things we look and say, what is your biographic information? What's your socialization? What kind of grit and energy do you bring to the job? So we try and measure those things, and we have as many contact points as we can get to be able to understand, who is this individual, really? And use those data points, and we have about 155 aspects that we use right now, and then match them to different entry-level jobs. >> So you're the Enterprise Architect of the Harambee Youth Employment Accelerator. I love that term, accelerator. >> Yes, right. >> And I also love the term Enterprise Architect, because both carry some clout. One of the themes is digital transformation, which is kind of a generic term, the analysts all talk about it. But really we're talking about the cloud, mobile, digital world and the power that can bring. Accelerator on the youth side, they need an app. So you're essentially providing a digital capability, not the old brick and mortar. >> That's right. >> How do you architect all of this?
Because you got to assume there's an app at the edge, either a downloadable app or website, phone-- >> So we have actually quite an interesting problem to solve, because for our young people, they don't have access to apps. The majority of our young people are on feature phones, basic phones, not smartphones. And data in South Africa is very expensive. So for that young person, we need to provide as low-touch a connection point to our services as possible, without making that cost them something, right? So we built a very basic Mobi site, no JavaScript, as blank as you can get. It's very boring if you look at it. >> So lightweight. >> Very lightweight. But it's the tip of an iceberg. So from there we collect certain information, but then we have an award-winning contact center that makes 35 thousand calls every month. And we engage with a young person in an outbound call for about 15 minutes. And it's that 15 minutes that we use to talk to this young person, understand them, figure out who they are and what they are, and use that to gather our data points. We then have assessments that we run. So we run psychometric assessments, we have competence assessments, and we gather all those data points and we start understanding this young person in a way that we can go to an employer, because on that side, for the employer, we need to be able to say you trust us that when we give you this young person, that we say this person will do well in your job. Well, you have to have trust in us to be able to do that. So we need to provide that data to say, well, this is how we came up with it. So we take quite a lot of effort in that. >> You're verifying in a way, putting your reputation on the line with the candidates. >> Yes. >> At the same time, you don't know when the inbound touch is going to happen, so you got to have all that material ready to go. >> That's right.
So the big data, the collection of that information, and the understanding of it... And we're on a journey to start figuring out, how can we use artificial intelligence, how can we use ML in a way that improves our accuracy, but at the same time, leaves out anything that may be biased against these young people. So we're taking a very cautious approach to it. But it's a lot of big data. We're trying to consume it as best we can. Plus, we're trying to think about, how do we provision our services for the employers? Because again, it's a demand-led business, so we want to find as many jobs as we can so we can take young people to those jobs. So extend our reach to the employers and-- >> The heavy lifting, so that they don't have to. >> Yeah, so they don't have to. >> Carol, talk about the dynamic with Google Cloud, because this is the theme we're hearing all week. You guys do the heavy lifting, and at the edge of the user experience, you take the toil out of it. The word toil has been-- >> It keeps coming up. >> It keeps coming up. Taking that toil, the hard work, the friction out of it. In this case, the connectivity costs, being productive at that point of transaction... >> Exactly. >> They're doing the back end heavy lifting. This is kind of like a core theme across. >> That is what the promise of the Cloud is supposed to be, right? Which is to remove all that back end toil, I love that word too, the toil, the mundaneness of it all, so that folks like Harambee can actually focus on delivering great service to both potential employers and employees. So we're trying to automate as much of that infrastructure, that's what we announced a lot around serverless, around containers, this idea of you don't need to worry about it. You don't have to provision the server now. You don't have to worry about patches. You don't have to worry about security. We'll take care of that for you.
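The matching Navid describes, scoring a candidate's data points against a job profile while deliberately leaving certain attributes out so they can't bias the result, can be sketched roughly as follows. This is only an illustrative sketch: every field name, threshold, and the excluded-attribute list here is invented for the example, not taken from Harambee's actual pipeline, which runs on roughly 155 aspects stored in Cloud SQL and BigQuery.

```python
# Hypothetical sketch of candidate-to-job matching with bias-sensitive
# attributes excluded from scoring. All keys and values are invented.

# Attributes deliberately left out of scoring so they cannot bias a match.
EXCLUDED = {"gender", "home_language", "neighbourhood"}

def match_score(candidate: dict, job_profile: dict) -> float:
    """Fraction of a job's scored requirements that the candidate meets."""
    # Keep only requirements that are allowed to influence the score.
    scored = {k: v for k, v in job_profile.items() if k not in EXCLUDED}
    if not scored:
        return 0.0
    met = sum(1 for k, required in scored.items()
              if candidate.get(k, 0) >= required)
    return met / len(scored)

candidate = {"numeracy": 0.8, "grit": 0.9, "gender": "F", "communication": 0.6}
job = {"numeracy": 0.7, "communication": 0.7, "grit": 0.5, "gender": "F"}
print(round(match_score(candidate, job), 2))  # 0.67: meets 2 of 3 scored requirements
```

The point of the sketch is the exclusion set: the "cautious approach" to ML that Navid mentions amounts to deciding, up front, which data points are allowed to influence a match at all.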
>> I just love your phrase proxy for competence, and I can't help but think, I've got kids in college and, you know, that's the whole objective of the application, right? We've got SATs and PSATs, and they take a couple data sets, but relative to the number of data sets that you describe... And I would add the intimacy of those data sets, versus an ACT, an SAT, and a transcript. You probably have a really interesting insight, and if you can correlate to the proxies of competency, this is something that has a much greater kind of opportunity than just helping these kids that you need to help, and it's really important. But that's a really interesting take, to use a much bigger data set, sophistication, great tools and infrastructure to do that mapping of competency to that job. >> Absolutely, and we're very focused on understanding, how do we use this data to provision a network for our young people to be able to describe themselves on entry? So one of the things we found in South Africa, and I'm sure it's a fairly universal problem, is that if you are unemployed, one of the things that prevents you from finding employment is you cannot access a network in which people that have jobs or describe jobs, you don't have access to that network. And so the ability to stand up and say, hey, this is who I am, these people have said, this is my profile as an individual, and say Harambee, or whoever it is, says that I am competent in these things. That gives them an in, that gives them some way of entering that network. And for instance, we've done a certain study that said that if a young lady takes just a basic CV that has a stamp on it from Harambee with a description of who they are and what their competencies are, that improves their chances of finding a job by 30%, up to 30%, and that's significant, right?
And this is not us finding the job for them, this is them going out and looking for a job, so it's describing and helping this person enter that network by providing, again, a proxy for competence. >> Talk about the relationship with Google. What is Google working with you guys on? And what's next for you guys? >> Google has helped us immensely. We receive those credits, and those credits allowed us to take that first step into the cloud. They gave us a little bit of breathing room, alright, so we could take that step. We also have access to some Googlers, that have helped talk to us a little bit about ML, and they have been helping us out on that. In terms of the next steps, it's 10X time. It's time to grow, it's time to use this scale, it's time to use the opportunity that we have to make the real impact that we've been searching for. >> Connect those jobs to those folks. >> Absolutely, because this is not a small problem. We've got a big problem to solve and we're really excited to be able to do it. >> I'm glad you're doing that. >> Awesome. >> It's a great, great mission. Carol, I want to get your thoughts finally, just to kind of end this segment and kind of end our time here at Google Cloud. Good opportunity for someone who's been looking at the landscape of the products. What's been the vibe of the show, from your standpoint? Obviously you've been planning this for months, it's showtime, it's coming to a close, we're day three, you heard, it's going to close in 30 minutes. Are you happy? >> Yeah, I mean we're thrilled. We're thrilled. We were just talking earlier, it's been a tremendous three days of just great interaction with fantastic customers, partners, developers, it's just the level of engagement... Google Cloud is about making the Cloud available for everyone. We wanted this to be a place for people to engage, to make things, to try things, to be hands-on, to be in sessions with people like Harambee, to actually understand what the Cloud can do.
And we're super excited. We've seen that in spades. The feedback has been tremendous. I hope you heard that as well. We're really excited. We believe that the capabilities we have around what we're doing in data analytics, machine learning, on top of this incredibly robust infrastructure, we really believe that there are amazing problems we can solve together. >> We had a couple of our reporters here earlier saying people who think Google is far behind are not here at the event. I got to say, give you guys some props, you guys are bringing... We know you've got great technology, everyone kind of knows that, who knows Google certainly knows the size and the scope of the great technology. But you're making it consumable. And you're thinking about the enterprise, versus, we're Google, use our great stuff because we use it, be like Google. People aren't like Google because no one has that many servers. (laughs) Right. So it's self-awareness. This has really been a great stride you guys have shown. And the customers on stage. >> Oh, they're fantastic. >> That's the proof in the pudding. At the end of the day-- >> They're fantastic. Showing how you can actually apply it, how you can apply AI, machine learning to actually solve real-world problems, that's what we were most excited about. Like you said, lots of great technology. What we want to do is connect the dots. >> And Diane Greene, I thought, my favorite soundbite was: security is the number one worry, AI is the number one opportunity. >> Absolutely. >> I think if you look at it from that lens, everything falls into place. >> Absolutely. >> Well thanks for coming on, thanks for having theCUBE this week, Google. And congratulations on your great venture, and good luck with your initiative. >> Thank you very much. >> Thank you both. >> Alright, that's theCUBE coverage here, live in San Francisco. I'm John Furrier, Jeff Frick; Dave Vellante went home last night. He's in our office taking care of some business.
I want to thank everyone for watching. And that's a wrap here from San Francisco. Thanks for watching.
SUMMARY :
Brought to you by Google Cloud and Navid Erfani-Ghadimimi, welcome to theCUBE. Take a minute to explain your project. and match them to opportunities that we find. to these young people using cell service channels? we need to be able to describe you as a human, right? I love that term, accelerator. And I also love the term Enterprise Architect, So we have actually quite an interesting problem to solve, And it's that 15 minutes that we use putting your reputation on the line with the candidates. At the same time, you don't know so we can take young people to those jobs. and at the edge of the user experience, Thinking of that toil, They're doing the back end heavy lifting. this idea of you don't need to worry about it. but relative to the number of data sets that you describe. And so the ability to stand up and say, And what's next for you guys? it's time to use the opportunity that we have We've got a big problem to solve we're day three, you heard, it's going to close in 30 minutes. We believe that the capabilities we have I got to say, give you guys some props, At the end of the day-- What we want to do is connect the dots. And Diane Greene I thought of, I think if you look at it from that lens, and good luck with your initiative. And that's a wrap here from San Francisco.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Diane Greene | PERSON | 0.99+ |
Carol Carpenter | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Navid Erfani-Ghadimimi | PERSON | 0.99+ |
Carol | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
South Africa | LOCATION | 0.99+ |
Rwanda | LOCATION | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
Navid Erfani-Ghadimi | PERSON | 0.99+ |
30% | QUANTITY | 0.99+ |
Africa | LOCATION | 0.99+ |
35 thousand calls | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
last night | DATE | 0.98+ |
30 minutes | QUANTITY | 0.98+ |
BigQuery | TITLE | 0.98+ |
first step | QUANTITY | 0.98+ |
Google Cloud | TITLE | 0.98+ |
ORGANIZATION | 0.97+ | |
about 15 minutes | QUANTITY | 0.97+ |
JavaScript | TITLE | 0.97+ |
Harambee | ORGANIZATION | 0.97+ |
this week | DATE | 0.96+ |
up to 30% | QUANTITY | 0.96+ |
about 155 aspects | QUANTITY | 0.94+ |
Cloud Sequel | TITLE | 0.93+ |
day three | QUANTITY | 0.92+ |
10X | QUANTITY | 0.91+ |
Next 18 | DATE | 0.83+ |
Googlers | ORGANIZATION | 0.83+ |
10X time | QUANTITY | 0.82+ |
theCUBE | ORGANIZATION | 0.81+ |
Google Cloud | ORGANIZATION | 0.77+ |
2018 | DATE | 0.71+ |
Vice President | PERSON | 0.62+ |
Harambee | LOCATION | 0.58+ |
Cloud | TITLE | 0.55+ |
Next | DATE | 0.55+ |
GCP | TITLE | 0.51+ |
Next 2018 | DATE | 0.47+ |
CUBE | ORGANIZATION | 0.46+ |
Storage and SDI Essentials Segment 2
>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE! Now here's your host, Stu Miniman! (bubbly music) >> I'm Stu Miniman and this is theCUBE's Boston area studio, where we're talking about storage and SDI solutions. But before we get into SDI and all the industry buzz, we're gonna talk a little bit about some of the real business drivers. And joining me for this segment, happy to welcome back Randy Arseneau and Steve Kenniston, gentlemen, great to see you. >> Thanks for bein' here Stu. >> Thanks Stu, great to be here. >> Thank you! Alright, so, talkin' about transformation, customers are going through transformations, IBM's going through transformation, everything's going in some kind of journey. But, let's talk about, you know, it used to be IT sat on the side, Randy, we talked about in the intro, you know, IT and the business, you know, wait, they actually need to talk, communicate, work together. What are some of the key drivers that you're hearing from customers? >> So, it's a good question, and we talked a little bit about it on the previous segment. But I think what's really happening now is that a lot of the terms that our industry has kind of overused and commoditized have sorta become devalued, right? So, they no longer really mean anything significant. Terms like agility and flexibility and IT business alignment and transformation, which we hear a million times every day, they've become just kind of background noise, but the reality is, especially now, in this era where, you know, information and data and analytics are driving businesses, and they're no longer, you know, nice things to have for the super advanced, very sophisticated companies, they're table stakes, I mean, they're needed to survive in today's global economy. So they've taken on a whole different meaning, so when we talk about agility, for instance, agility means something very specific in the context of IT business alignment, and our solution stacks in particular.
Generally speaking, the way I like to think of it is, I overuse sports analogies, but I think this one's relevant. So, a good quarterback is able to read and react. So, as the defense is shifting and making pre-snap adjustments, the quarterback views the field, sees what's happening, and is able to very quickly develop or institute a new offensive game plan to take advantage of that situation. So that whole read-and-react idea is something that's very important for a business, especially now. So businesses are under constant pressure, competitive pressure, market pressure, compliance pressure, to be able to exploit their own IP, and their own data, and their own information, very, very quickly. So that's number one. By using things like integration and automation within their IT organization, as opposed to the old, you know, kind of vertical method of doing things, IT organizations are now able to respond to those rapid course corrections much more effectively. Same thing with flexibility, so when an organization needs to be flexible, or wants to be flexible, to adapt to a very rapidly changing environment. Things like, and this is really where Steve's product line is particularly relevant, things like data reuse, right? So we've got organizations that are running their business on this data, which is their most important asset. We're helping them develop new and creative ways to repurpose that data, efficiently, quickly, cost-effectively, so they can expand the value. So any given piece of data can now have a multiplicative value compared to its original form. >> Yeah, I think it's actually pretty important. When you think about it, we're out there talking about products, right? And a lot of vendors are doing this, right? Buy my products and you'll get agility, or you'll get flexibility, or that sorta thing, or maybe even more importantly, in a lot of the enablement we use to educate people, we'll say, you know, this product enables data reuse.
Well what does that really mean, right? What does that mean for the business, right? And, when you say okay, well it makes the business more agile, well, how do you do that? Then it encompasses a whole breadth of solution sets around making that data available for the user, things like software-defined storage, things like particular technologies that can do data reuse. So, it kinda boils itself down in the stack, but to Randy's point, the words have been so commoditized that we don't really understand what they mean, and I think part of what we're trying to do is make sure, when we talk agility, flexibility, or even our three patterns that we talked about, modernize and transform: what do they mean to us? What do they mean to you, the user? What do they mean? Because it's very important to connect those two. >> Yeah, and I love that, 'cause for a while we used to say, well, you know, do I get it faster, better, or cheaper? Or maybe I can give you some combination, and there're certain customers you talk to and it's like look, if you can just go faster, faster, faster, that's what I need. But, it's not speed alone; the differentiation for things like agility is, number one: we are all horrible at predicting. It's like, okay, I'm gonna buy this, I'm gonna use this for the next three to five years, and six months into it, I either greatly over- or underestimated, or everything changed, we made an acquisition, competition came in. I need to be able to adjust to that, so that was, I love the sports analogy, we love sports analogies on theCUBE. >> Well, you know.
>> So that, you know, if I planned for, you know, this was the plan of attack, and what do ya know, they traded for a player the day before, or their star quarterback went down, and the backup, who I didn't train against, all of a sudden their offense is different and we get torn apart, because we didn't plan, we couldn't react to it, you run back at halftime and try to adjust, but, you know, you need to be able to change. >> And again, I think another, from my perspective, from an IT-business alignment perspective, another metaphor that works well is, you know, kinda what I call the DevOps-ification of business, right? So what's happening now, and it's interesting I think, is that you're seeing some of the practices around DevOps and agile development, which by the way, IBM uses for our own products. You're seeing that push upstream to the business, so the business is actually adopting DevOps-like methodologies for prototyping, you know, testing hypotheses; they're doing interesting things that kinda grew out of that world. So if you think about it, even 10 years ago, that would've been kind of unimaginable; you would always have the business applying pressure, and projecting its requirements onto IT. Now you're seeing much more of a collaborative approach to attacking the market, gaining competitive advantage, and succeeding financially. >> Yeah, and if people aren't really familiar with DevOps, the thing that, you know, I really like about it is, number one, it's no longer, you know, we used to be on these release trains. Okay, everybody on board the 18 to 24 month release train, we're gonna plan, oh wait, we didn't get this feature in, it'll be in the next one, we'll do a patch in six or eight months; no, no, no. There's the term CI/CD, continuous integration and continuous deployment. It's, you know, push. Often. You know, daily, if not hourly, if not more, and, it's like wait, what about security, what about all these things? No, no, no.
If we actually plan and have a culture that buys in and understands and communicates, and you've got proper automation, you know, it's a game changer; all of those things that you used to be like, ugh, I couldn't do it. Now it's like no, we can do it. >> The only thing constant in business these days is change. >> Absolutely. >> So, if you know that, and you have to be able to plan and articulate and be ready for change, how do you make sure that the underlying infrastructure is ready to kind of adapt to whatever request you may have of it, right? It's now alive, right, it's like a person; I wanna ask it a question, and I need it to help respond quickly. >> And a lot of the focus of this series, as we talked about in the intro section, is our software-defined infrastructure portfolio, which in many ways is kinda the fabric upon which a lot of these things are being woven now, right? So, we talk about DevOps, we talk about this rapid cycle, and this continuous pace of change and adaptation. We're delivering solutions to market that really accelerate and enable that, right, so, one of the things we wanna make sure we communicate, you know, both internally and externally, is the connective tissue that exists between the solutions, products, and technology capabilities on the software-defined infrastructure side, and how that affects the business, and how that allows the business to be more agile, to be more flexible, to transform the way it thinks about taking solutions to market, competing, opening up new markets, you know, seizing opportunities in the marketplace. >> Yeah, if you think about it, when you talk about strategy, smart companies, they've got feedback loops, and strategy is something you revisit often. And that leads to, when you talk about modernizing an environment, I always used to, you poke fun at marketing, oh we're going to make you future-ready! Well when can I be in the future?
Well, the future will be soon, well, then when I get there am I now out of date, because the future's not now? So, what is modernize, what does that really mean, and, you know, how does that fit in? >> Yeah, and it's a great point, and I think we look at modernization as kind of the constant retooling, right? So, IT is constantly looking for ways to be more responsive, to be more agile, to be more closely aligned, in lockstep, with the business. And again, we're trying to deliver solutions to market that enable them to do that effectively, cost-effectively, quickly, you know, get up to speed rapidly. There's another, so we talked a little bit in the intro section about the C-level survey, the study that was done globally by IBM; it's done every year, and the 2018 one was published recently. Another one of the themes that was very important is this concept of innovate, don't institutionalize, and the idea is that old companies, slow-moving companies, more traditional companies, have a tendency to solve a problem or introduce and implement a system of some sort and be wed to it, because they adapt all of the ancillary workflows and everything around it to fit that model. Which may make sense the day that it implements and goes live, but it almost immediately becomes obsolete or gets phased out, so, you need to have the ability to integrate, automate, innovate, like constantly be changing and adapting. >> Yeah, I love that; actually, in the innovation communities they say you don't want best practices, you want next practices, because I always need to be able to look at how I can do, right, learn what works and share that information. But, you're right, it's not: this is the way we're doing it, this is the way it must always be, so let it be written, so let it be done. You know, no, we need to move and adjust.
>> And I think, if you think about these things, in the beginning of the year, when we launched kind of our educational content for our sellers, it was really three patterns, right? There was modernize and transform, next-gen applications, and then application refactoring. And in the beginning, when we started to talk about it, which I think is where 90% of the clients fall, it's this modernize and transform, right? Easy to say, but what does it really mean? So, if you break it down into the fact that we know what clients have today, right? We know, you know, VMware's big, KVM is big, you know, SQL is big, Oracle is big, right? If that's foundationally who you're talking to on an everyday basis, how do you help them take that solution set, and, don't start refactoring today, right, but take them to a point where, when they start to do the refactoring, they're well positioned to do it simply and easily, right? So it's a long journey, but to get there you really need to kind of free up and shake loose some of those bolts, so that it's a lot more flexible over here. >> Yeah, so, talking about things that are changing all the time, so tell me, transformation, it's not about an angle, it's, you know, it's about journeys and being ready, so, you know, help us close the loop on that. >> Yeah, so we talk a lot about that internally, and again, transformation is another one of those kinda buzzwords that we're trying to sorta demystify, because it can be applied in a million different ways, and they're all relevant and valid, right, so transformation is a very broadly applicable term. When we talk about transformation, we're specifically talking about kind of the structural transformation of the infrastructure itself, so how are we making the storage and the compute more cloud-like, more flexible, more easily provisioned, more self-service.
So there's kind of a foundational piece at the infrastructure level. We talk about transformation at the workflow level, so things like DevOps, like continuous integration and deployment. How do we provide our clients with the material they need, the raw materials, whether it's software, technology, education, best practices, all of the above, to be able to implement these new ways of doing business? And then there's really transformation of the business itself. Now, a couple of those, the first two, are kind of happening within IT, but they are being driven by the transformation that the business is undergoing, so, the business is constantly, again, if they're still around and they're prospering, they're constantly looking for new markets to reach into, looking for ways to compete more effectively, looking for ways to gain and sustain competitive advantage in this very, very dynamic environment. So transformation touches all of those; they're all equally valid. From our perspective, specifically as IBM, we're trying to tackle the sorta foundational level, and then kinda, by using assets like this, you know, research that we do at the C-level, we're trying to kinda build the connective tissue between the ground-level IT stuff and how the business is changing. >> I think, I mean, really as importantly, right, we're trying to build the foundation such that, as we're thinking about the business, think of taxis transforming to be more Uber-like, or think even of automotive industries wanting to be more Uber-like, right? I read an interesting article about, you know, auto manufacturers today thinking about no more buying of cars, right? That's a transformation of my business, right? How do I do that? Now a lot of it is, you know, I gotta set up the infrastructure, I gotta set up, you know, people and process to do some of this, but the infrastructure has to adapt as well, right?
And we gotta, 'cause that's not gonna happen tomorrow, to your point Stu, like I wanna design for tomorrow, then the next day, then the next day, then the future; when is the future, right? But I need to have an infrastructure that can evolve with me as my business evolves and I get to this goal. >> And the shifts are now happening, they're no longer kinda tectonic shifts, they're seismic, right, they're not gradual, incremental, I mean they are in some cases, but they're more often seismic changes, and that's a great example. Uber burst onto the scene and fundamentally changed the way humans transport themselves from one place to another. And there's a million examples of that, right? There's been genomic research, and even in media and entertainment, there's lots of ways and lots of places in which this shift towards more seismic change in the industry, or in a particular use-case, is happening every day. >> Yeah, so I love your insight. When you talk about your partners, you know, the old days were great, where you used to just say hey, you've got a problem, I've got a product that will solve what you need, transaction, box, done. Now, it's like, we've been saying, when are we gonna have that silver bullet in security, and it's like, never. It's like, security is, you know, it's a practice, or, you know, it's a general theme that you have to do; it's like DevOps isn't a product, it's something we need to do. I heard a great line, it was like, you know, oh, this whole AI stuff, well can I have a box and a data scientist and I can solve this stuff? No, no, no, this is going to be an initiative, we're gonna go through lots of iterations, and there's lots of pieces, so. It's a different world today; how do you help people through this, as to, you know, what the relationship is now? >> No, it's very interesting, and to your point, can I buy a box that does that, right?
We were at Think this year, and our security team, or actually I think it was our blockchain team, was up, and I'm very interested in blockchain and what it's gonna do for the community as we kinda grow, and that sorta thing. And up on a chart they put this slide that had, I mean, thousands of different partners that we partner with, and that we also enable to kinda deliver stuff, and in some cases we're competitive, in some cases it solves security, in some cases it does this. Now all of a sudden, it's not one thing anymore, it's how does it fit into our infrastructure; but, back to your point about partnerships. I think IBM is constantly looking to its partners because they have really that trusted value and trusted relationship with the client, and at the end of the day, as much as we can come in and say oh, this box will solve your problem, we don't really know what their problems are, right? It's the people who have those relationships, who know where they're going along that evolutionary scale, that we really need to work and tie in with closely, to make sure that the solutions we deliver on the underlying side are meeting their needs, which then in turn meet our clients' needs; I think that's where we're goin'. >> And actually blockchain is a great example of kinda building these vibrant ecosystems, right? Which is something else that large companies like IBM sometimes struggle with, kinda building these very dynamic, very vibrant ecosystems, but I think IBM's very good at it, and I think we've demonstrated that in a number of different places, blockchain being a recent example, but there are many others. And the SDI portfolio is no different, right; we've got strong partnerships across the board with other software providers, other go-to-market partners, you know, other content providers; there's a million different angles that we are able to introduce into the conversation.
So we think all of those things taken together allow our sellers and our partners to bring a solution to their clients, regardless of their industry or their size or their particular use-case, that helps them optimize their performance in this new world of super agile, constantly changing, continuous transformation, and do so, we think, better than anyone else in the industry. >> Constantly changing, distributed in nature; sounds just like the blockchain itself. (both laugh) Alright, Steve, Randy, thank you so much for helpin' us demystify some of these key business drivers. Lots more we'll be covering, I'm Stu Miniman, thanks for watching theCUBE. (bubbly music)
Pandit Prasad, IBM | DataWorks Summit 2018
>> From San Jose, in the heart of Silicon Valley, it's theCUBE. Covering DataWorks Summit 2018. Brought to you by Hortonworks. (upbeat music) >> Welcome back to theCUBE's live coverage of DataWorks here in sunny San Jose, California. I'm your host Rebecca Knight along with my co-host James Kobielus. We're joined by Pandit Prasad. He handles analytics projects, strategy, and management at IBM Analytics. Thanks so much for coming on the show. >> Thanks Rebecca, glad to be here. >> So, why don't you just start out by telling our viewers a little bit about what you do, in terms of the relationship with Hortonworks and the other parts of your job. >> Sure, as you said, I am in Offering Management, which is also known as Product Management, for IBM; I manage the big data portfolio from an IBM perspective. I was also working with Hortonworks on developing this relationship, nurturing that relationship, so it's been a year since the announcement of the partnership. We announced this partnership exactly last year at the same conference. And now it's been a year, so this year has been a journey of aligning the two portfolios together. Right, so Hortonworks had HDP and HDF. IBM also had similar products, so we have, for example, Big SQL; Hortonworks has Hive, so how do Hive and Big SQL align together? IBM has Data Science Experience; where does that come into the picture on top of HDP? So before this partnership, if you look into the market, it has been: you sell Hadoop, you sell a SQL engine, you sell data science. What this year has given us is more of a solution sell. Now with this partnership we go to the customers and say here is an end-to-end experience for you. You start with Hadoop, you put more analytics on top of it, you then bring Big SQL for complex queries and federation and visualization stories, and then finally you put data science on top of it, so it gives you a complete end-to-end solution, the end-to-end experience for getting the value out of the data.
>> Now IBM a few years back released Watson Data Platform for team data science, with DSX, Data Science Experience, as one of the tools for data scientists. Is Watson Data Platform still the core, I call it DevOps for data science and maybe that's the wrong term, that IBM provides to market, or is there sort of a broader DevOps framework within which IBM goes to market with these tools? >> Sure, Watson Data Platform one year ago was more of a cloud platform, and it had many components to it, and now we are getting a lot of components on to the (mumbles), and Data Science Experience is one part of it, so Data Science Experience... >> So Watson Analytics as well, for subject matter experts and so forth. >> Yes. And again, Watson has a whole suite of business-based offerings; Data Science Experience is more of a particular aspect of the focus, specifically on the data science, and that's now available on-prem, and now we are building this on-prem stack, so we have HDP, HDF, Big SQL, Data Science Experience, and we are working towards adding more and more to that portfolio. >> Well, you have a broader reference architecture and a stack of solutions, AI and Power and so forth, for more of the deep learning development. In your relationship with Hortonworks, are they reselling more of those tools into their customer base to supplement and extend what they already resell, DSX, or is that outside the scope of the relationship? >> No, it is all part of the relationship. These three have been the core of what we announced last year, and then there are other solutions. We have the whole governance solution, right; so again it goes back to the partnership: HDP brings with it Atlas. IBM has a whole suite of governance portfolio, including the governance catalog. How do you expand the story from being a Hadoop-centric story to an enterprise data lake story? And then now we are taking that to the cloud; that's what Truata is all about.
Rob Thomas came out with a blog yesterday morning talking about Truata. If you look at it, it is nothing but a governed data lake hosted offering, if you want to simplify it. That's one way to look at it; it caters to the GDPR requirements as well. >> For GDPR, for the IBM Hortonworks partnership, what is the lead solution for GDPR compliance? Is it Hortonworks Data Steward Studio, or is it any number of solutions that IBM already has for data governance and curation, or is it a combination of all of that, in terms of what you, as partners, propose to customers for soup-to-nuts GDPR compliance? Give me a sense for... >> It is a combination of all of those, so it has HDP, it has HDF, it has Big SQL, it has Data Science Experience, it has IBM governance catalog, it has IBM data quality, and it has a bunch of security products, like Guardium, and it has some new IBM proprietary components that are very specific towards data (cough drowns out speaker) and how you deal with the personal data and sensitive personal data as classified by GDPR. I'm supposed to query some high-level information, but I'm not allowed to query deep into the personal information, so how do you block those queries, how do you understand those? These are not necessarily part of Data Steward Studio. These are some of the proprietary components that are thrown into the mix by IBM. >> One of the requirements that is not often talked about under GDPR, Ricky of Hortonworks got into it a little bit in his presentation, is the notion that if you are using an EU citizen's PII to drive algorithmic outcomes, they have the right to full transparency into the algorithmic decision paths that were taken. I remember IBM had a tool under the Watson brand that wraps up a narrative of that sort.
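The query-blocking requirement Pandit describes, where high-level queries are allowed but deep queries into personal fields are refused, can be sketched as a small tag-and-mask layer. This is a hypothetical illustration only: the column names, tags, and policy are invented, and it merely stands in for the proprietary governance components he mentions.

```python
# Sketch of GDPR-style column masking: columns tagged as personal data
# are redacted before results leave the governed store, and requests for
# raw PII are refused outright. Hypothetical tags and policy.

PII_TAGS = {"name", "email"}  # columns classified as personal data

def mask_row(row, tags=PII_TAGS):
    """Return a copy of the row with tagged columns redacted."""
    return {col: ("***" if col in tags else val) for col, val in row.items()}

def query(rows, request_pii=False):
    """High-level queries get masked rows; deep PII access is blocked."""
    if request_pii:
        raise PermissionError("deep queries into personal data are blocked")
    return [mask_row(r) for r in rows]

customers = [
    {"name": "Ana", "email": "ana@example.com", "country": "DE", "spend": 120},
    {"name": "Ben", "email": "ben@example.com", "country": "FR", "spend": 80},
]

print(query(customers)[0])
# {'name': '***', 'email': '***', 'country': 'DE', 'spend': 120}
```

Aggregate fields like country and spend still come through, so reporting works, while the tagged personal columns never leave the store unmasked.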
Is that something that IBM still offers, it was called Watson Curator a few years back, because I'm getting a sense right now that Hortonworks has a specific solution, not to say that they may not be working on it, that addresses that side of GDPR; do you know what I'm referring to there? >> I'm not aware of something from the Hortonworks side beyond the Data Steward Studio, which offers basically identification of what some of the... >> Data lineage, as opposed to model lineage. It's a subtle distinction. >> It can identify some of the personal information and maybe provide a way to tag it and, hence, mask it, but the Truata offering is the one that is bringing some new research assets; after the GDPR guidelines became clear, they got deep into how we cater to those requirements. These are relatively new proprietary components; they are not even being productized, and that's why I am calling them proprietary components that are going into this hosting service. >> IBM's got a big portfolio, so I'll understand if you guys are still working out what position. Rebecca, go ahead. >> I just wanted to ask you about this new era of GDPR. The last Hortonworks conference was sort of before it came into effect, and now we're in this new era. How would you say companies are reacting? Are they in the right space for it, in the sense of, do they really understand the ripple effects and how it's all going to play out? How would you describe your interactions with companies in terms of how they're dealing with these new requirements? >> They are still trying to understand the requirements and interpret them, coming to terms with what that really means. For example, I met with a customer, and they are a multi-national company.
They have data centers across different geos, and they asked me: I have somebody from Asia trying to query the data, so the query should go to Europe, but the query processing should not happen in Asia; the query processing should all happen in Europe, and only the output of the query should be sent back to Asia. You wouldn't have been able to think in these terms before the GDPR guidance era. >> Right, exceedingly complicated. >> Decoupling storage from processing enables those kinds of fairly complex scenarios for compliance purposes. >> It's not just about the access to data; now you are getting into where the processing happens, where the results are getting displayed, so we are getting... >> Severe penalties for not doing that, so your customers need to keep up. There was an announcement at this show, at DataWorks 2018, of an IBM Hortonworks solution, IBM Hosted Analytics with Hortonworks. I wonder if you could speak a little bit about that, Pandit, in terms of what's provided; it's a subscription service? If you could tell us what subset of IBM's analytics portfolio is hosted for Hortonworks' customers? >> Sure, as you said, it is a hosted offering. Initially we are starting off with a base offering with three products: it will have HDP, IBM Db2 Big SQL, and DSX, Data Science Experience. Those are the three solutions. Again, as I said, it is hosted on IBM Cloud, so customers have a choice of different configurations, whether it be VMs or bare metal. I should say this is probably the only offering, as of today, that offers a bare metal configuration in the cloud.
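The multi-national scenario described above, where the query comes from Asia but the processing stays pinned to Europe and only the output travels back, can be sketched as a simple routing rule. The region names, dataset, and policy here are invented for illustration; a real deployment would enforce this at the platform level, not in application code.

```python
# Sketch of GDPR-aware query routing: compute is pinned to the region
# where the data resides, and only the query result crosses regions.
# Hypothetical datasets, regions, and policy.

DATA_REGION = {"eu_customers": "EU"}  # dataset -> region it must stay in

def process_in_region(dataset, rows):
    """Stand-in for compute running inside the data's home region.
    Returns an aggregate only; raw rows never leave the region."""
    return {"dataset": dataset, "row_count": len(rows)}

def run_query(dataset, rows, caller_region):
    region = DATA_REGION[dataset]
    # Processing happens in `region` no matter where the caller sits;
    # the caller only ever receives the output of the query.
    result = process_in_region(dataset, rows)
    return {"processed_in": region, "returned_to": caller_region, **result}

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
print(run_query("eu_customers", rows, caller_region="APAC"))
# {'processed_in': 'EU', 'returned_to': 'APAC', 'dataset': 'eu_customers', 'row_count': 3}
```

The point Pandit makes about decoupling storage from processing is what makes this routing possible: the compute can be placed independently of where the request originates.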
You think about this offering as taking your on-prem data center experience directly onto the cloud. It is geared towards very high performance. The hardware and the software are all configured and optimized for providing high performance, not necessarily for ad-hoc or ephemeral workloads; it is capable of handling massive, sticky workloads. It's not meant for "I turn on this massive computing power for a couple of hours and then switch it off," but rather, "I'm going to run these massive workloads as if it is located in my data center"; that's number one. It comes with the complete set of HDP. If you think about it, currently in the cloud you have Hive and HBase, the SQL engines, and the storage as separate pieces; security is optional, governance is optional. This comes with the whole enchilada. It has security and governance all baked in. It provides the option to use Big SQL, because once you get on Hadoop, the next experience is: I want to run complex workloads. I want to run federated queries across Hadoop as well as other data storage. How do I handle those? And then it comes with Data Science Experience, also configured for best performance and integrated together. As a part of this partnership, I mentioned earlier that we have progressed towards providing this story of an end-to-end solution. The next step of that is: yeah, I can say that it's an end-to-end solution, but do the products look and feel as if they are one solution? That's what we are getting into, and I have featured some of those integrations. For example Big SQL, the IBM product: we have been working on baking it very closely into HDP. It can be deployed through Ambari, and it is integrated with Atlas and Ranger for security. We are improving the integrations with Atlas for governance.
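The federated query Pandit describes, one query spanning Hadoop and other data storage, can be sketched with two in-memory SQLite databases standing in for the two systems and a thin federation layer joining their results. The tables and data are invented for illustration; a real engine like Big SQL pushes work down to each store rather than pulling everything client-side.

```python
# Sketch of query federation: two separate stores (stand-ins for a
# Hadoop-side table and a relational system), one combined answer.
# Hypothetical tables and data.
import sqlite3

hadoop = sqlite3.connect(":memory:")  # stand-in for the Hadoop-side store
rdbms = sqlite3.connect(":memory:")   # stand-in for another data store

hadoop.execute("CREATE TABLE clicks (user_id INT, clicks INT)")
hadoop.executemany("INSERT INTO clicks VALUES (?, ?)", [(1, 10), (2, 3)])

rdbms.execute("CREATE TABLE users (user_id INT, name TEXT)")
rdbms.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Ana"), (2, "Ben")])

def federated_join():
    """Pull from both stores and join them in the federation layer."""
    names = dict(rdbms.execute("SELECT user_id, name FROM users"))
    return sorted(
        (names[uid], n)
        for uid, n in hadoop.execute("SELECT user_id, clicks FROM clicks")
    )

print(federated_join())  # [('Ana', 10), ('Ben', 3)]
```

To the caller it reads like one query over one system, which is exactly the "complex workloads across Hadoop as well as other data storage" experience being described.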
>> Say you're building a Spark machine learning model inside DSX on HDP, within IH (mumbles) IBM hosting with Hortonworks on HDP 3.0; can you then containerize that machine learning Spark model and deploy it into an edge scenario? >> Sure, first was Big SQL; the next one is DSX. DSX is integrated with HDP as well. We could run DSX workloads on HDP before, but what we have done now is, if you want to run the DSX workloads, if I want to run a Python workload, I need to have the Python libraries on all the nodes that I want to deploy to. Suppose you are running a big cluster, a 500-node cluster: I need to have the Python libraries on all 500 nodes, and I need to maintain the versioning of them. If I upgrade the versions, then I need to go and upgrade and make sure all of them are perfectly aligned. >> In this first version, will you be able to build a Spark model and a TensorFlow model and containerize them and deploy them? >> Yes. >> Across a multi-cloud, and orchestrate them with Kubernetes to do all that meshing; is that a capability now, or planned for the future within this portfolio? >> Yeah, we have that capability demonstrated at the pedestal today, so that is a new integration. We can run a virtual, we call it a virtual Python environment. DSX can containerize it and run it on the data that's enclosed in the HDP cluster. Now we are making use of both the data in the cluster, as well as the infrastructure of the cluster itself, for running the workloads. >> In terms of the layers of the stack, is it also incorporating the IBM distributed deep-learning technology that you've recently announced? Which I think is highly differentiated, because deep learning is increasingly becoming a set of capabilities that are across a distributed mesh, playing together as if they're one unified application. Is that a capability now in this solution, or will it be in the near future? DDL, distributed deep learning? >> No, we have not yet. >> I know that's on the AI Power platform currently, gotcha.
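The versioning pain Pandit describes, keeping Python libraries aligned across all 500 nodes, is exactly what the containerized virtual Python environment removes: the environment ships with the workload instead of living on every node. A minimal sketch of the alignment check an operator would otherwise need looks like this; the node names and versions are invented.

```python
# Sketch of the cross-node library alignment check that a containerized
# (virtual) Python environment makes unnecessary. Hypothetical versions.

REQUIRED = {"numpy": "1.14.3", "pandas": "0.23.0"}

def misaligned_nodes(cluster):
    """Return {node: mismatches} for nodes whose libraries have drifted."""
    report = {}
    for node, installed in cluster.items():
        bad = {lib: installed.get(lib) for lib, ver in REQUIRED.items()
               if installed.get(lib) != ver}
        if bad:
            report[node] = bad
    return report

cluster = {
    "node001": {"numpy": "1.14.3", "pandas": "0.23.0"},
    "node002": {"numpy": "1.13.1", "pandas": "0.23.0"},  # drifted version
    "node003": {"numpy": "1.14.3"},                      # missing pandas
}

print(misaligned_nodes(cluster))
# {'node002': {'numpy': '1.13.1'}, 'node003': {'pandas': None}}
```

With the workload's environment packaged in a container, every node runs the identical library set, so this per-node audit, and the upgrade choreography Pandit mentions, disappears.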
>> It's what we'll be talking about at next year's conference. >> That's definitely on the roadmap. We are starting with the base configuration of bare metal and VM configurations; the next one, depending on how the customers react to it, is definitely bare metal with GPUs optimized for Tensorflow workloads. >> Exciting, we'll stay tuned in the coming months and years, I'm sure you guys will have that. >> Pandit, thank you so much for coming on theCUBE. We appreciate it. I'm Rebecca Knight for James Kobielus. We will have more from theCUBE's live coverage of DataWorks, just after this.
Eric Herzog, IBM | DataWorks Summit 2018
>> Live from San Jose in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have with us Eric Herzog. He is the Chief Marketing Officer and VP of Global Channels at the IBM Storage Division. Thanks so much for coming on theCUBE once again, Eric. >> Well, thank you. We always love to be on theCUBE and talk to all of theCUBE analysts about various topics, data, storage, multi-cloud, all the works. >> And before the cameras were rolling, we were talking about how you might be the biggest CUBE alum in the sense of you've been on theCUBE more times than anyone else. >> I know I'm in the top five, but I may be number one, I have to check with Dave Vellante and crew and see. >> Exactly and often wearing a Hawaiian shirt. >> Yes. >> Yes, I was on theCUBE last week from CISCO Live. I was not wearing a Hawaiian shirt. And Stu and John gave me a hard time about why was not I wearing a Hawaiian shirt? So I make sure I showed up to the DataWorks show- >> Stu, Dave, get a load. >> You're in California with a tan, so it fits, it's good. >> So we were talking a little bit before the cameras were rolling and you were saying one of the points that is sort of central to your professional life is it's not just about the storage, it's about the data. So riff on that a little bit. >> Sure, so at IBM we believe everything is data driven and in fact we would argue that data is more valuable than oil or diamonds or plutonium or platinum or silver to anything else. It is the most viable asset, whether you be a global Fortune 500, whether you be a midsize company or whether you be Herzogs Bar and Grill. So data is what you use with your suppliers, with your customers, with your partners. 
Literally everything around your company is really built around the data, so it's about most effectively managing it and making sure, A, it's always performant, because when it's not performant they go away. As you probably know, Google did a survey that after one, two seconds they go off your website, they click somewhere else, so it has to be performant. Obviously in today's 365, 7 by 24 company it needs to always be resilient and reliable and it always needs to be available, otherwise if the storage goes down, guess what? Your AI doesn't work, your Cloud doesn't work, whatever workload; if you're more traditional, your Oracle, SQL, you know, SAP, none of those workloads work if you don't have a solid storage foundation underneath your data driven enterprise. >> So with that ethos in mind, talk about the products that you are launching, that you newly launched, and also your product roadmap going forward. >> Sure, so for us everything really starts with storage as this critical foundation for the data driven, multi Cloud enterprise. And as I've said before on theCUBE, all of our storage software is now Cloud-ified, so if you need to automatically tier out to IBM Cloud or Amazon or Azure, we automatically will move the data placement around from on premises out to a Cloud, and for certain customers who may be multi Cloud, in this case using multiple private Cloud providers, which happens due to either legal reasons or procurement reasons or geographic reasons for the larger enterprises, we can handle that as well. That's part of it; the second thing is we just announced earlier today an artificial intelligence, an AI reference architecture, that incorporates a full stack from the very bottom, both servers and storage, all the way up through the top layer, then the applications on top, so we just launched that today. >> AI for storage management or AI for running a range of applications? >> Regular AI, artificial intelligence from an application perspective.
So we announced that reference architecture today. Basically think of the reference architecture as your recipe, your blueprint, of how to put it all together. Some of the components are from IBM, such as Spectrum Scale and Spectrum Computing from my division, our servers from our Cloud division. Some are open source: TensorFlow, Caffe, things like that. It basically gives you what the stack needs to be, and what you need to do in various AI workloads, applications and use cases. >> I believe you have distributed deep learning as an IBM capability, that's part of that stack, is that correct? >> That is part of the stack, it's like in the middle of the stack. >> Is it, correct me if I'm wrong, that's containerization of AI functionality? >> Right. >> For distributed deployment? >> Right. >> In an orchestrated Kubernetes fabric, is that correct? >> Yeah, so when you look at it from an IBM perspective, while we clearly support the virtualized world, the VMwares, the Hyper-Vs, the KVMs and the OVMs, and we will continue to do that, we're also heavily invested in the container environment. For example, one of our other divisions, the IBM Cloud Private division, has announced a solution that's all about private Clouds; you can either get it hosted at IBM or literally buy our stack- >> Rob Thomas in fact demoed it this morning, here. >> Right, exactly. And you could create- >> At DataWorks. >> A private Cloud initiative, and there are companies that, whether it be for security purposes or whether it be for legal reasons or other reasons, don't want to use public Cloud providers, be it IBM, Amazon, Azure, Google or any of the big public Cloud providers. They want a private Cloud, and IBM either, A, will host it or, B, provides IBM Cloud Private. All of that infrastructure is built around a containerized environment. We support the older world, the virtualized world, and the newer world, the container world.
In fact, our storage allows you to have persistent storage in a container environment, Docker and Kubernetes, and that works on all of our block storage, and that's a freebie, by the way, we don't charge for that. >> You've worked in the data storage industry for a long time, can you talk a little bit about how the marketing message has changed and evolved since you first began in this industry, and in terms of what customers want to hear and what assuages their fears? >> Sure, so nobody cares about speeds and feeds, okay? Except me, because I've been doing storage for 32 years. >> And him, he might care. (laughs) >> But when you look at it, the decision makers today, the CIOs, in 32 years, including seven startups, IBM and EMC, I've never, ever, ever met a CIO who used to be a storage guy, ever. So, they don't care. They know that they need storage and the other infrastructure, including servers and networking, but think about it, when the app is slow, who do they blame? Usually they blame the storage guy first, secondarily they blame the server guy, thirdly they blame the networking guy. They never look to see that their code stack is improperly done. Really what you have to do is talk applications, workloads and use cases, which is what the AI reference architecture does. What my team does in non-AI workloads, it's all about, again, data driven, multi Cloud infrastructure. They want to know how you're going to make a new workload, like AI, fast. How you're going to make their Cloud resilient, whether it's private or hybrid. In fact, IBM storage sells a ton of technology to large public Cloud providers that do not have the initials IBM. We sell gobs of storage to other public Cloud providers, big, medium and small. It's really all about the applications, workloads and use cases, and that's what gets people excited. You basically need a position; just like I talked about with the AI foundations, storage is the critical foundation.
We happen to be, knocking on wood, let's hope there's no earthquake, since I've lived here my whole life, and I've been in earthquakes, I was in the '89 quake. Literally fell down a bunch of stairs in the '89 quake. If there's an earthquake, as great as IBM storage is, or any other storage or servers, it's crushed. Boom, you're done! Okay, well you need to make sure that your infrastructure, really your data, is covered by the right infrastructure and that it's always resilient, it's always performing and is always available. And that's what IBM Storage is about, that's the message, not about how many gigabytes per second in bandwidth or what's the- Not that we can't spew that stuff when we talk to the right person, but in general people don't care about it. What they want to know is, "Oh, that SAP workload took 30 hours and now it takes 30 minutes?" We have public references that will say that. "Oh, you mean I can use eight to ten times less storage for the same money?" Yes, and we have public references that will say that. So that's what it's really about. Storage has really moved on from the speeds and feeds, Nuremberger sort of thing, and now all the Nurembergers are doing AI and Caffe and TensorFlow and all of that, they're all hackers, right? It used to be storage guys who used to do that, and to a lesser extent server guys, and definitely networking guys. That's all shifted to the software side, so you've got to talk the languages. What can we do with Hortonworks? By the way, we were named in Q1 of 2018 as the Hortonworks infrastructure partner of the year. We work with Hortonworks all the time, at all levels, whether it be with our channel partners, whether it be with our direct end users, however the customer wants to consume. We work with Hortonworks very closely, and other providers as well, in that big data analytics and AI infrastructure world, that's what we do.
>> So the containerization side of the IBM AI stack, and the containerization capabilities in Hortonworks Data Platform 3.0, can you give us a sense for how you plan to, or do you plan at IBM, to work with Hortonworks to bring these capabilities, your reference architecture, into more, or bring their environment for that matter, into more of an alignment with what you're offering? >> So we haven't made an exact decision on how we're going to do it, but we interface with Hortonworks on a continual basis. >> Yeah. >> We're working to figure out what's the right solution, whether that be an integrated solution of some type, whether that be something that we do through an adjunct to our reference architecture or some reference architecture that they have, but we always make sure, again, we are their partner of the year for infrastructure, named in Q1, and that's because we work very tightly with Hortonworks and make sure that what we do ties out with them, hits the right applications, workloads and use cases, the big data world, the analytics world and the AI world, so that we're tied off, you know, together, to make sure that we deliver the right solutions to the end user. Because that's what matters most, is what gets the end users fired up, not what gets Hortonworks or IBM fired up, it's what gets the end users fired up. >> When you're trying to get into the head space of the CIO and get your message out there, I mean, what is it, what would you say is it that keeps them up at night? What are their biggest pain points and then how do you come in and solve them? >> I'd say the number one pain point for most CIOs is application delivery, okay? Whether that be to the line of business, put it this way, let's take an old workload, okay? Let's take that SAP example. That CIO was under pressure because they were trying, in this case it was a giant retailer who was shipping stuff every night, all over the world. Well guess what?
The green undershirts in the wrong size went to Paducah, Kentucky, and then one of the other stores, in Singapore, which needed those green shirts, ended up with shoes, and the reason is, they couldn't run that SAP workload in a couple hours. Now they run it in 30 minutes. It used to take 30 hours. So since they're shipping every night, you're basically missing a cycle, essentially, and you're not delivering the right thing from a retail infrastructure perspective to each of their nodes, if you will, to their retail locations. So they care about what they need to do to deliver to the business the right applications, workloads and use cases on the right timeframe, and they can't go down; people get fired for that at the CIO level, right? If something goes down, the CIO is gone, and obviously for certain companies that are more in the modern mode, okay? People who are delivering stuff and their primary transactional vehicle is the internet, not retail, not through partners, not through people like IBM, but their primary transactional vehicle is a website; if that website is not resilient, performant and always reliable, then guess what? They are shut down and they're not selling anything to anybody, which isn't true if you're Nordstroms, right? Someone can always go into the store and buy something, right, and figure it out? Almost all old retailers have not only a connection to core, but they literally have a server and storage in every retail location, so if the core goes down, guess what, they can transact. In the era of the internet, you don't do that anymore. Right? If you're shipping only on the internet, you're shipping on the internet. So whether it be a new workload, okay, or an old workload, or if you're doing the whole IOT thing. For example, I know a company that I was working with, it's a giant, private mining company. They have those giant, three-story dump trucks you see on the Discovery Channel.
Those things cost them a hundred million dollars, so they have five thousand sensors on every dump truck. It's a fricking dump truck, but guess what, they've got five thousand sensors on there so they can monitor and take proactive action, because if that goes down, whether these be diamond mines or uranium mines or whatever it is, it costs them hundreds of millions of dollars to have a thing go down. That's, if you will, taking it out of the traditional high tech area which we all talk about, whether it be Apple or Google or IBM. Okay, great, now let's put it to some other workload. In this case, this is the use of IOT, in a big data analytics environment with AI based infrastructure, to manage dump trucks.
And then on top of it, quite honestly, from an AI big data analytics perspective, the more data you have, the more valuable it is, the more you can mine it or the more oil, it's as if the world was just oil, forget the pollution side, let's assume oil didn't cause pollution. Okay, great, then guess what? You would be using oil everywhere and you wouldn't be using solar, you'd be using oil and by the way you need more and more and more, and how much oil you have and how you control that would be the power. That right now is the power of data and if anything it's getting more and more and more. So again, you always have to be able to be resilient with that data, you always have to interact with things, like we do with Hortonworks or other application workloads. Our AI reference architecture is another perfect example of the things you need to do to provide, you know, at the base infrastructure, the right foundation. If you have the wrong foundation to a building, it falls over. Whether it be your house, a hotel, this convention center, if it had the wrong foundation, it falls over. >> Actually to follow the oil analogy just a little bit further, the more of this data you have, the more PII there is and it usually, and the more the workloads need to scale up, especially for things like data masking. >> Right. >> When you have compliance requirements like GDPR, so you want to process the data but you need to mask it first, therefore you need clusters that conceivably are optimized for high volume, highly scalable masking in real time, to drive the downstream app, to feed the downstream applications and to feed the data scientist, you know, data lakes, whatever, and so forth and so on? >> That's why you need things like Incredible Compute which IBM offers with the Power Platform. And why you need storage that, again, can scale up. >> Yeah. 
>> Can get as big as you need it to be. For example, in our reference architecture we use what we call Spectrum Scale, which is a big data analytics workload performance engine; it's multi-threaded, multi-tasking. In fact, one of the largest banks in the world, if you happen to bank with them, your credit card fraud detection is being done on our stuff, okay? But at the same time we have what's called IBM Cloud Object Storage, which is an object store. You want to take every one of those searches for fraud, and when they find out that no one stole my MasterCard or the Visa, you still want to put it in there, because then you mine it later and see patterns of how people are trying to steal stuff, because it's all being done digitally anyway. You want to be able to do that. So you, A, want to handle it very quickly and resiliently, but then you want to be able to mine it later, as you said, mining the data. >> Or do high value anomaly detection in the moment, to be able to tag the more anomalous data that you can then sift through later, or maybe in the moment for real-time mitigation. >> Well, that's highly compute intensive, it's AI intensive and it's highly storage intensive on the performance side, and then what happens is you store it all for, let's say, further analysis, so you can tell people, "When you get your Amex card, do this and they won't steal it." Well, the only way to do that is to use AI on this ocean of data, where you're analyzing all this fraud that has happened, to look at patterns, and then you tell me, as a consumer, what to do. Whether it be in the financial business, in this case the credit card business, healthcare, government, manufacturing. One of our resellers actually developed an AI based tool that can scan boxes and cans for faults on an assembly line, and has actually sold it to a beer company and to a soda company, so that instead of people looking at the cans, like you see on the Food Channel, to pull it off, guess what? It's all automatically done.
There's no people pulling the can off, "Oh, that can is damaged" and they're looking at it and by the way, sometimes they slip through. Now, using cameras and this AI based infrastructure from IBM, with our storage underneath the hood, they're able to do this. >> Great. Well Eric thank you so much for coming on theCUBE. It's always been a lot of fun talking to you. >> Great, well thank you very much. We love being on theCUBE and appreciate it and hope everyone enjoys the DataWorks conference. >> We will have more from DataWorks just after this. (techno beat music)
Chris Brown, Nutanix | DockerCon 2018
>> Live from San Francisco, it's theCUBE! Covering DockerCon 18, brought to you by Docker and its ecosystem partners. >> Welcome back to theCUBE, I'm Lisa Martin with John Troyer, we are live from DockerCon 2018 on a sunny day here in San Francisco at Moscone Center. Excited to welcome to theCUBE Chris Brown, the Technical Marketing Manager at Nutanix. Chris, welcome to theCUBE! >> Thank you so much for having me. >> So you've been with Nutanix for a couple years, so we'll talk about Nutanix and containers. You have a session, Control and Automate Your Container Journey with Nutanix. Talk to us about what you're gonna be talking about in the session. What's Nutanix's role in helping the customers get over this trepidation of containers? >> Yeah, definitely, and it's, it's a 20 minute session, so we've got a lot of information to cover there, 'cause I wanna go over a little bit about, you know, who Nutanix is from beginning to end, but the main part I'm gonna be focusing on in that session is talking about how we, with our Calm product, can automate VMs and containers together, and how we're moving towards being able to, you know, define your application in a blueprint and understand what you're trying to do with your application. You know, one of the things I always say is that nobody runs SQL because they love running SQL, they run SQL to do something, and our goal with Calm is to capture that something, what it depends on, what it relies on. Once we understand what this particular component is supposed to do in your application, we can change that, we can move that to another cloud, or we can move it to containers without losing that definition, and without losing its dependence on the other pieces of the infrastructure, and exchange information back and forth. So we're talking a little bit about what we're doing today with Calm and where we're going with it to add Kubernetes support.
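The blueprint idea Chris describes can be made concrete with a small sketch. This is hypothetical, not Calm's actual blueprint format; the service names and fields are invented. It shows the core notion of capturing what each piece depends on, so a platform can derive a deployment order regardless of whether a piece runs as a VM or a container:

```python
# Hypothetical blueprint sketch (not Calm's real DSL). Each service records
# where it runs and what it depends on; a topological sort then yields a
# valid deployment order from those dependencies.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

blueprint = {
    "db":  {"runs_on": "vm",        "depends_on": []},
    "api": {"runs_on": "container", "depends_on": ["db"]},
    "web": {"runs_on": "container", "depends_on": ["api"]},
}

def deployment_order(bp):
    """Return service names ordered so dependencies deploy first."""
    ts = TopologicalSorter({name: svc["depends_on"] for name, svc in bp.items()})
    return list(ts.static_order())

print(deployment_order(blueprint))  # → ['db', 'api', 'web']
```

The point of keeping `runs_on` separate from `depends_on` is the one Chris makes: you can flip a component from VM to container without losing its place in the dependency graph.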
>> Chris, we're sitting here in the ecosystem expo at DockerCon and your booth is busy, there's a lot of good activity. Are people coming up to you and asking, do they know Nutanix, do they understand who you are, do they just say, oh, you guys sell boxes? You know, you're both a, you're a systems provider, you're a private cloud provider, and a hybrid-cloud provider. Do people understand that, the crowd here, and what kinda conversations are you having? >> It's actually really interesting 'cause we're seeing a broad range of people. Some customers are comin' up, or some people are coming up that they don't reali-- they don't know that other places in their company use Nutanix, but they wanted to learn more about us, so they've got some sort of initiative that, you know, a lot of times it is around containers, around understanding, you know, they're starting to figure out, you know, how do we deploy this, how do we connect? You know, we've got something we wanna deploy here and there, how do we do that in a scalable way? But we also have some that have no idea who we are and just comin' up like, so you've got a booth and some awesome giveaways, (laughing) what do I have to do to get that, and what do you do? And you know, I really kinda summarize it as two main groups of people that I've seen. One of 'em is the people who've been doing containers forever, they know it, they've been doing it, they're very familiar with the command line, any GUI is too much GUI for them. And then we've got the people who are just getting started, they've kinda been told, hey, containers are coming, we need to figure out how to do this, or we need to start figuring out our container strategy. And so they're here to learn and figure out how to begin that.
And so it's really interesting, because the ones that are just getting started or just learning, we obviously help out a ton, because the people who came before had to go through all the fire, all the configuration, all of the challenges, and figure out their own solutions, whereas now we kinda come in with a little bit more opinionated example of how to do these things. >> So DockerCon, this year is the fifth DockerCon, they've got between five thousand and six thousand people. I was talking with John earlier and Steve Singh as well about how impressed I was when I was leaving the general session; it was standing room only, a sea of heads. So they've got, obviously, developers here, right, sweet spot, IT folks, enterprise architects, and execs. You talked about Nutanix getting those two polar opposite ends of the spectrum, the container lovers, the ones who are the experts, and the ones going, I know I have to do this. I'm curious, what target audience are you talking to that goes, hey, I'm tasked with doing this? Are those developers, are those IT folks, are you talking with execs as well? Give us that mix. >> For the most part they are IT folks, your traditional operators who are trying to figure out this new shift in technology, and we do talk to some developers. And it's actually been interesting to speak with developers because, you know, in general that hasn't been Nutanix's traditional audience, selling this product called infrastructure to developers.
But the few developers I've talked to have gotten really receptive and really excited about what we can do and how we can help them do their job faster by getting their IT people on board. But for the most part it'd be traditional IT operators who're looking at this new technology and, you know, givin' it kind of a little squinty eye, trying to figure out where it's going. Because at the end of the day, with any shift in IT, there's never a time where something is completely sunset. I mean, people are still using mainframes today, people will be using mainframes forever; people are just starting their virtualization journey today, they're just going from bare metal to VMs. And even with that shift, there's always something that gets left behind. So they're trying to figure out how they can get used to this new container shift, because at the end of the day not everything is gonna be containerized, because there are just simply some things that won't be able to, or they'll scope out the project and then it'll end up falling by the wayside, or budget will go somewhere else. So they're trying to figure out how they can understand the container world from the world that they come from, the VM-centric world. And then, you know, it's really interesting to talk to them and show them how we're able to bring those two together and, not only bring the container journey up another step, but also carry your VMs along the way as well. >> Chris, Nutanix is at the center of several different transitions, right, both old school hardware to kind of hyperconverged, but now also kind of private hybrid-cloud to more kind of multi-cloud, hybrid-cloud. When we're not at DockerCon, so when you're out in the field, how real is multi-cloud, how real is containers in a normal enterprise?
>> Definitely, so, multi-cloud is a very hot topic for sure. There's no company, no IT department, that doesn't have some sort of cloud strategy, or isn't analyzing it or looking at it. The main way that we get there, or one of the core tools we have, is Calm once again--and I'm obviously biased because that's my wheelhouse, right, in marketing, so I talk about that day in, day out. But with Calm, we support today AHV and ESXi, both on and off Nutanix, as well as AWS, AWS GovCloud, and GCP, and Azure is coming down the line; that's where Kubernetes will come in as well. So we see a lot of people looking at this and saying, hey, you know, we do want to be able to move into AWS, we do want to be able to move into GCP and use those clouds, or unify them together, and Calm lets us do that. There are a couple of other prongs to that as well. One of them is Beam, Nutanix Beam, which is a product we announced at .NEXT last month, around multi-cloud cost optimization. Beam came from an acquisition--the company was called Minjar, I'm probably saying that horribly wrong, but they made a product called Botmetric, which we've rebranded and are integrating into the platform as Nutanix Beam. So what that allows you to do is--it's provided as a SaaS service, so you can go use it today, there's a trial available, all that--you give it AWS credentials and it reaches out and takes a look at your billing account and says, hey, we noticed that these VMs are running 50% of the time at no capacity, or they're not being used at all, you can probably shrink these and save money; or, hey, we noticed that in general you're using this baseline level, you should buy reserved instances to save this much per month.
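The kind of analysis described here--flagging idle VMs from utilization data and estimating reserved-instance savings from a steady baseline--can be sketched roughly as below. This is an illustrative sketch only; the function names, thresholds, and prices are assumptions for the example, not Beam's actual algorithm or any AWS API.

```python
# Hypothetical sketch of the cost-optimization analysis described above.
# Thresholds and prices are illustrative assumptions, not Beam's logic.

def classify_vm(cpu_samples, idle_threshold=5.0, low_threshold=20.0):
    """Classify a VM from a list of hourly CPU% utilization samples."""
    if not cpu_samples:
        return "no-data"
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg < idle_threshold:
        return "idle"           # candidate for shutdown
    if avg < low_threshold:
        return "underutilized"  # candidate for downsizing
    return "ok"

def reserved_instance_savings(on_demand_hourly, reserved_hourly,
                              hours_per_month=730):
    """Monthly savings from moving steady baseline usage to a reserved rate."""
    return (on_demand_hourly - reserved_hourly) * hours_per_month

# Example fleet with made-up utilization samples:
fleet = {
    "web-01": [2.1, 1.0, 3.5, 0.9],   # barely used
    "db-01":  [55.0, 62.0, 48.0],     # healthy
    "etl-01": [12.0, 15.0, 9.0],      # low but not idle
}
report = {name: classify_vm(samples) for name, samples in fleet.items()}
monthly_savings = reserved_instance_savings(0.0464, 0.0294)
```

A real tool would pull the utilization samples from the cloud provider's monitoring and billing data rather than a hard-coded dictionary, but the decision logic follows this shape.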
And it presents all that in a really easy-to-use interface, and then, depending on how you want to use it, you can even have it automatically go and resize your VMs for you. So it can say, hey, you've got a t2.medium running, it really would make a lot more sense as, you know, a small. It'll give you the API call and you can go make it on your own, or--if you give it authorization, of course--it can go ahead and run that for you, just downsize those and start saving you that money. So that's another fork of the multi-cloud strategy. And the last one is one of the other announcements we made last month, which was around--excuse me--Xtract for VMs. So Xtract is a portfolio of products: we've got Xtract for DBs, where we can scan your SQL databases and move them onto ESXi or AHV, from bare metal or wherever the SQL databases are running, and Xtract for VMs allows us to scan ESXi VMs and move them over to AHV. And then we're taking Xtract for VMs to the next step, being able to scan your AWS VMs and pull them back on-prem, if that's what you're looking for as well; that's right now in beta and they're working on fine-tuning it. Because at the end of the day, it's not just enough to view and manage; we really need to get to someplace where we can move workloads between clouds and put the workload in the right place. Because really, with IT, it's always a balance of tools; there's never one silver bullet that solves every problem. Every time a new project comes along, you're trying to choose the right tool based on the expertise of the team, based on what tools are already in use, based on policy. So we want to make sure that we have the tool sets across the board, that you can choose--and change those choices later on--and always use the right thing for the particular application you're running.
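The downsizing recommendation described here--take an instance type with low average utilization and suggest the next size down in the same family--can be sketched as follows. The size ladder, threshold, and function name are hypothetical assumptions for illustration; they are not Beam's real rules or an AWS API.

```python
# Hypothetical sketch of a downsizing recommendation: suggest the next
# smaller size in the same instance family when utilization is low.
# The size ladder and threshold are illustrative assumptions.

SIZES = ["nano", "micro", "small", "medium", "large", "xlarge"]

def downsize_recommendation(instance_type, avg_cpu_pct, threshold=20.0):
    """Return a smaller instance type if utilization is low, else None."""
    family, _, size = instance_type.partition(".")
    if avg_cpu_pct >= threshold or size not in SIZES:
        return None  # busy enough, or a size we don't know how to step down
    idx = SIZES.index(size)
    if idx == 0:
        return None  # already the smallest size in the ladder
    return f"{family}.{SIZES[idx - 1]}"

# e.g. a t2.medium averaging 8% CPU would be recommended as a t2.small
```

A production tool would also check memory, network, and burst behavior before shrinking an instance; CPU alone, as here, is the simplest possible signal.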
>> Choice was a big theme this morning during the general session, where Docker was talking about choice, agility, and security. I'm curious, with some of the things that were announced--they're talking about the only multi-cloud, multi-OS, multi-Linux platform, and they also announced federated containerized application management, saying, hey, containers have always been portable but management hasn't been. I'm curious what your perspectives are on some of the evolution Docker is announcing today, and how that will help Nutanix customers successfully navigate this container journey? >> Definitely. And, you know, federation's critical. Container management in general is always a challenge; one of the things that I've heard time and time again is that getting RBAC to work for Kubernetes has always been very difficult. (laughs) And so, getting that in there--that is such a basic feature that people expect--getting the ability to properly federate roles, or federate out authentication, is huge. There's a reason that SAML took the world by storm: nobody wants to manage passwords, you want to rely on some external source of truth. Being able to pull that in, being able to use some cloud service and have Docker federate against other pieces, is very important there. I might've gone way off there, but whatever. (laughing) >> No, no, absolutely. >> And then the other piece of it is the multi-cloud idea: it doesn't matter whether you're running on-prem or in the cloud. That is what people need; one of the true promises of containers has always been portability. So seeing the delivery of that is huge--being able to provision it on-prem, on Nutanix obviously, because that's who I'm here from (laughing), and being able to provision to the cloud and bring those together, that's huge.
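The RBAC-plus-federation point above boils down to binding an identity that arrives from an external source of truth (say, a group asserted by a SAML or OIDC provider) to a set of Kubernetes permissions. A minimal sketch of that wiring, built as plain Python dictionaries following the standard `rbac.authorization.k8s.io/v1` manifest schema--the namespace, group, and role names here are hypothetical:

```python
# Minimal sketch of Kubernetes RBAC: a namespaced Role granting
# read access to pods, bound to a group supplied by a federated
# identity provider. Names are hypothetical examples.

def make_role(namespace, name, resources, verbs):
    """Build a Role manifest granting the given verbs on core resources."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": name},
        "rules": [{"apiGroups": [""], "resources": resources, "verbs": verbs}],
    }

def bind_group_to_role(namespace, group, role_name):
    """Build a RoleBinding tying an externally-federated group to a Role."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"namespace": namespace, "name": f"{role_name}-{group}"},
        "subjects": [{"kind": "Group", "name": group,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": role_name,
                    "apiGroup": "rbac.authorization.k8s.io"},
    }

role = make_role("dev", "pod-reader", ["pods"], ["get", "list", "watch"])
binding = bind_group_to_role("dev", "idp-dev-team", "pod-reader")
```

Serialized to YAML and applied with `kubectl apply`, manifests of this shape are what "federating roles" ultimately produces: the identity provider owns who is in `idp-dev-team`, while the cluster only declares what that group may do.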
>> Chris, you've talked about Kubernetes a couple of times now, obviously a big topic here; it seems to be emerging as the de facto application deployment configuration for multi-cloud. What's Nutanix doing with Kubernetes? >> Yeah, so, definitely, Kubernetes is in many ways winning that particular battle. I mean, don't get me wrong, Swarm is great, and the other pieces are great, but Kubernetes is becoming the de facto standard. One of the things we're working on is bringing containers as a service through Kubernetes, natively on Nutanix, to give you an easy way to manage containers through Prism just the way you manage VMs, and to manage Kubernetes clusters. And, you know, it's really important that that is just one solution, because there are as many different Kubernetes orchestration engines as you can name--any name you bring up. >> It's like Linux back in the day: there are a lot of different distributions, or a lot of different ways to consume Kubernetes. >> Exactly. And so we want to bring an opinionated way of consuming Kubernetes to the platform natively, so it's a couple of clicks away, it's very easy to do. But that's not the only way we're doing it. We also have a partnership with Docker, where we're doing things like deploying Docker EE through Calm--it's of course all sorts of legalese, but they're working on that--so it's natively in everyone's Prism Central, and you can just one-click deploy Docker EE. We have a demo running at our booth deploying Rancher using Calm as well, because we want to be able to provide whatever set of infrastructure makes the most sense for the customer, based on what they've used in the past, what they're familiar with, or what they want.
But we also want to offer an opinionated way to deliver containers as a service, for those who don't know, or are just trying to get started, or when that's simply what they're looking for--because when you've got a thousand choices to make, everyone's going to make slightly different ones. So we can't ever offer the one true way--no one can offer the only way to do Kubernetes--we need to offer flexibility across the board as well. >> One of the words we hear all the time at trade shows is flexibility. So, I love customer stories; as a customer marketing person, I think there's no greater brand validation you can get than the voice of the customer. And I was looking on the Docker website recently, and they were saying customers that migrate to Docker Enterprise Edition are actually reducing costs by 50%. So, you're a marketing guy: what are some of your favorite examples of customers where Nutanix is really helping them to just kill it on their container journey? >> Yeah, so, I wish I'd thought of this sooner, I shoulda. (laughing) No, but one of our customers actually--this always brings a smile to my face, because they came and saw us last year at the booth. They're one of our existing long-time customers, and they were looking to adopt Docker. They came up and we gave them a demo, showed them how all the pieces fit together, and he's just looking at it and he's like, man, I need this in my life right now. It was mostly a demo around Docker EE, using the unified control plane, and showing off, using Nutanix drivers, how we can back up the data and protect individual components of the containers in a very granular fashion. He's like, man, I need this in my life, this is incredible. And he went and grabbed his friend, ran him over, and was like, dude, we're already using Nutanix, look what they can do!
And he's the perfect example of the two kinds of customers: this guy goes, hold on a second, jumps on the command line, like, oh yeah, I do this all the time from there. (laughing) But that moment--the light in the eyes of the customer, where they were like, I need to be able to see this, to be able to use this, and to be able to integrate this--I will not forget that anytime soon. That's really why I think we're going down a very good path there. Because when you have these tinkerers, the people who are really good at code--I mean, I spend a lot of time on the command line myself, even though I'm in marketing, so I don't know what I'm doing there, PowerPoints maybe? (laughing) Just because I can understand it from the command line, or an expert can understand it, doesn't mean you can share that. I've been trying to hand off some of the gear that I manage to another person, and it was like, oh, you just type out all these commands, and they're like, I have no idea what's going on here. (laughing) And so, seeing the customers be able to understand, in a GUI fashion, what their more in-depth coworkers have done--that just makes a lot of sense to me, and I like that a lot. >> It's great. >> And the last question, as we wrap up: one of the stats mentioned in the Docker press release this morning about the new announcements was that 85% of enterprise organizations have multi-cloud, and then we were talking with Scott Johnston, their Chief Product Officer, who said upwards of 90% of IT budgets are spent on keeping the lights on for existing applications. So there's a lot of need there for enterprises to go down this road.
I'm wondering, are you seeing at Nutanix any particular industries that are really leading edge here, saying, hey, we have a lot of money that we're not able to use for innovation? Are you seeing that in any specific industries, or is it kind of horizontal? >> To be honest, I've seen it kind of horizontally. I mean, I've spoken to many different customers, mostly around Calm, and they come from all different walks of life. I've talked to customers from SLED--state, local, and education--who've been really excited about their ability to start doing Hadoop better, because they do thousands of Hadoop clusters a year for their researchers, in the cloud or on-prem, or across both. I've talked to people in governments, I've talked to people in hospitals, and, you know, all sorts of-- >> I can imagine oil and gas, some of those industries that have a ton of data. >> Yeah, and oil and gas is actually really fascinating, because a lot of times, on a rig, they want to be able to use compute, but they can't exactly get to a cloud. So how do you innovate there, on the edge? How do you make a change in the core without making it on the edge, and how do you bring those together? There are really a lot of fascinating things happening around that. But I haven't noticed any one industry in particular; it's across the board--then again, by the time they get to me, it's probably self-selected. (laughing) But it's across horizontally: everyone is looking at, how can we use this vast technology--I just found out this is already being used in my environment because it's super easy, how do I keep my job?
(chuckles) Or, how do I adopt this and free up my keeping-the-lights-on investments for innovation, how do I save time? Because one of the things I've noticed with all of this cloud adoption, or container adoption, is that many times a customer will start making this push--not always from a low level, maybe from a high level--because they hear it's faster and better and that it'll just solve all their problems if they start using it. And because they rush into it, they often don't solve the fundamental problems that gave them the issues to begin with; they're just hoping the new technology fixes it. So now I am seeing some customers shift back and say, hey, I do want to adopt that, but I need to do it in a smart way, because we just ran at it and that caused us problems. >> Well, it sounds like, with all the momentum, John, that we've heard in the keynote, the general session this morning, and with some of the guests--you know, I think even Steve Singh was saying only about half of the audience is actually using containers--so it sounds like, with what you're talking about, and what we've heard consistently today, it's sort of the tip of the iceberg, so lots of opportunity. Chris, thank you so much for stopping by theCUBE and sharing with us all the exciting things that are going on at Nutanix with containers and more. >> Thank you so much for having me, it was a lot of fun. >> And we want to thank you for watching theCUBE. Lisa Martin with John Troyer, from DockerCon 2018; stick around, we will be right back with our next guest. (bubbly music)