
Jay Theodore & David Cardella, Esri | AWS re:Invent 2021


 

(upbeat music) >> Okay, we're back at AWS re:Invent 2021. You're watching theCUBE. My name is Dave Vellante, and we're here with Jay Theodore, who's the CTO of Enterprise and AI at Esri, and he's joined by Dave Cardella, who's the Principal Product Manager for Developer Technologies, also at Esri. Guys, thanks for coming on. Welcome to theCUBE. >> Thanks, Dave. >> Thanks, Dave. >> Jay, maybe you could give us a little background on Esri. What do you guys do? What are you all about? >> Sure. Esri is an old-timer, we are a 50-year-old software company. We are the pioneers in GIS and the world leader in GIS - geographic information systems. We build geospatial infrastructure that's built for the cloud, built for the edge, built for the field also, you can say. So, we do mapping and analytics. We help our customers solve very complex challenges by bringing location intelligence into the mix. Our customers sort of run the world, transform the world, and we sort of empower them with the technology we have. So, that's what we do. >> The original edge, and now of course, AWS is coming to you. >> Yeah. (both interviewees chuckling) >> Who are your customers, your main customers? Maybe share that. >> Yeah. We've got over 350,000 customers in... (Dave Cardella chuckling) Yeah. We're all- >> Dave Vellante: Scale. >> Yeah. (Dave Vellante laughing) In the public sector, especially, commercial businesses, non-profit organizations, and that really represents tens of millions of users globally. >> So, let's talk a little bit more about how things are changing. As they say, the edge is coming to you. Maybe AI, you know, 50 years ago... Actually, 50 years ago there was probably a lot of talk about AI. When I came into the business, you know, there was a lot of chatter about it. But now, it's real. All this data that we have and the compute power, the cost is coming down. So, AI is in your title? >> Jay: Yes. >> Tell us more about that. >> I think that AI's come of age.
When I went to grad school, AI was still in theory because we didn't have the compute and of course we didn't have all the data that was collected, right? Now, there's a lot of observation data coming in through IoT and many sensors and so on. So, what do you do with that? Like, human interpretation is pretty challenged, I would say. So, that's where AI comes in, to augment the intelligence that we have in terms of extracting information. So, geospatial AI, specifically, which we focus on, is to try to take location that's embedded with this kind of information and sort of extract knowledge and information out of it, right? Intelligence out of it. So, that's what we focus on: to complement location intelligence with AI, which we call geospatial AI. >> So, you can observe how things are changing, maybe report on that, and that's got to be a huge thing that we can talk about. So, maybe talk about some of the big trends that are driving your business. What are those? >> Yeah, that's a great question. So, I was listening to Sandy Carter's keynote yesterday and she really emphasized the importance of data. And, data is crucial to what we do as a technology company, and we curate data globally and we get our data from best-of-breed sources, and that includes commercial data providers, it includes national mapping agencies, and also a community maps program where we get data from our customers, from our global network of distributors and partners, and we take that data, we curate it, we host it and we deliver it back. And so, just recently for example, we're really excited 'cause we released the 2020 Global Land Cover. And so, Esri is the first company to release this data at 10 meter resolution for the entire planet, and it's made up of well over 400,000 earth observations from various satellites. So, you know, data is not only a nice-to-have anymore, it's actually a must-have. And so is location when we talk about data. They go hand in hand.
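As a toy illustration of the location primitive all of this builds on, here is a minimal Python sketch of snapping an observation to the nearest known place using the haversine great-circle distance. The place names and coordinates are illustrative only, not Esri data or APIs.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(observation, places):
    # Snap one observation to the closest named place.
    lat, lon = observation
    return min(places, key=lambda p: haversine_km(lat, lon, p[1], p[2]))

places = [
    ("Redlands", 34.06, -117.20),   # Esri HQ (approximate)
    ("Las Vegas", 36.17, -115.14),  # re:Invent venue (approximate)
]
print(nearest((36.12, -115.17), places)[0])
```

Real geospatial systems use projected coordinate systems and spatial indexes rather than a linear scan, but the distance-then-associate step is the same idea.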
>> 10 meters so I can look at the hole in the roof of my barn... >> Well... (Dave Cardella chuckling) >> Dave Vellante: Pretty much. >> It depends on what you're trying to do, right? So, I think you know, to talk about it, it's within context. GIS is all about context, right? It's bringing location into context in your decision-making process. It's sort of like the where along with the when, what, how and why. That's what GIS brings in. So, a lot of problems are challenging because we need to bring these things together. It's sort of like you're layering various layers of data that you have and then bringing them within context. Very often, the context that human minds understand, reflected in the physical world, is geographic location, right? So, that's what you bring in. And I would say that there's various kinds of data, also. Various types of data, formats of data: structured data, unstructured data, data captured extraterrestrially, you know, like, you can say, satellite imagery, from drones, from IoT. So, it's like on the ground, above the ground, under the ground. All these sensors are bringing in data, right? So, what GIS does is try to map that data to a place on the earth at very high precision, if you're looking at it locally, or at a certain position if it's regional, trying to find patterns, trying to understand what's emerging, and then, as you take this and infuse a geospatial layer into this, you can even predict what is going to happen based on the past. So, that's sort of like... You could say GIS being used for real world problems, like if you take some examples, COVID... The pandemic is one example. Being able to first discover where it happened, where it's spreading, you know, that's the tracking aspect, and then how you respond to that and then how you recover, you know, recovering as humans, as businesses and so on. So, we have widespread use of that. The most popular would be the Johns Hopkins dashboard, >> Dave Vellante: Board, yeah.
>> that everyone's seen. >> Vellante: We all use it... >> It's gotten trillions of hits and so on, right? That's one example. Another example is addressing racial equity by using location information. Similarly, social justice. Now, these are all problems that we face today, right? So, GIS is extensively used by our customers to solve such problems. And then of course, you have the climate change challenge itself, right? Where you're gathering all kinds of complex data that we can't comprehend, because you have to go back decades and try to bring all that together to compute. So, all of this together comes in the form of a geospatial cloud that we have as an offering. >> So, okay. That's amazing. I mean, you're building a super cloud, we call it. You know, and... So, how do you deal with... How do you work with AWS? What's the relationship there? Where do developers fit in? Maybe you can talk about that a little bit. >> Yeah. Yeah, that's a great question. So, we've got two main integration points with AWS. A lot of our location services that we expose data and capabilities through are built on AWS. So, we use storage, we use cloud caching and AWS's various data sets across the world quite heavily. So, that's one integration point. The other is a relatively new product that Amazon has released called Amazon Location Service. And so, what it does is it brings location and spatial intelligence directly into a developer's AWS dashboard. So, in the experience that they're already used to, they now get the power of Esri services and location intelligence right at their fingertips. >> So, you're... We started talking about the edge, your data architecture is very distributed, right? But, of course, you're bringing it back. So, how does that all work? You process it locally and then send some data back? Are you sending all data back? What does that flow look like?
>> I think the key thing is that our customers work with data of all kinds, all formats, all sizes, and some are in real-time, some are big data and archive, right? So, most recently, just to illustrate that point, this year, we released ArcGIS Enterprise on Kubernetes. It's the entire geospatial cloud made available for enterprise customers, and that's made available on AWS, on EKS. Now, when it's available on EKS, that means all these capabilities are microservices, so, they can be massively scaled. They're DevOps-friendly and you've got the full mapping and analytics system that's made available for this. >> Dave Vellante: Oh. >> And we sort of built it, you know, cloud native from the ground up, and the more important thing that we have now is connectivity with Redshift. Why is that important? Because a lot of our customers have geospatial data in these cloud data warehouses. Redshift is very important for them. And so, you can connect to that, you can discover these massive petabytes of data sets and then you can set up what we call the query layer. It's basically pushing analytics into Redshift and being able to bring out that data for mapping, visualization, for AI workflows and so on. It's pretty amazing and it's pretty exciting at this time. >> And, I mean... So much data. And then... What, do you tier it down into Glacier just to save some cost, or is it going to all stay in S3, or is it... >> So, we already work with S3, we've worked with RDS, we support Amazon Aurora, our customers are very happy with that. So, Redshift is a new offering for us, to connect to Redshift. >> Dave Vellante: Okay. >> So, the way the query layer works is all of your observation data is in Redshift, and your other kinds of data... Your authoritative data sets could come from various other sources, including Amazon Aurora, for example, okay? And then, you overlay them and use them.
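The overlay Jay describes, joining observation data against an authoritative data set, reduces to a key join. A minimal Python sketch; the records and field names are invented for illustration, not Esri's schema:

```python
# Toy "overlay": enrich raw observations (think Redshift) with attributes
# from an authoritative data set (think Aurora), joined on a region key.
authoritative = {
    "CA": {"name": "California", "population": 39_500_000},
    "NV": {"name": "Nevada", "population": 3_100_000},
}

observations = [
    {"region": "CA", "reading": 0.8},
    {"region": "NV", "reading": 0.5},
    {"region": "CA", "reading": 0.6},
]

def overlay(observations, authoritative):
    # Attach authoritative attributes to each observation that matches.
    return [
        {**obs, **authoritative[obs["region"]]}
        for obs in observations
        if obs["region"] in authoritative
    ]

enriched = overlay(observations, authoritative)
print(enriched[0]["name"], enriched[0]["reading"])
```

In practice the join key is usually a geometry intersection rather than a string code, and the join itself is pushed down into the warehouse, but the enrich-by-join shape is the same.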
Now, the data in Redshift is usually massive, so, when you run the analytic query, we let you cache that as a materialized view or as a snapshot that you can refresh, and you can work against that. This is really good because it complements our ability to actually take that data, to put it on a map image which we render server-side; it's got very complex, cartographic-ready symbology and rendering and everything in there. And you get these beautiful renderings of maps that come out of Redshift data. >> And you're pushing AI throughout your stack, is that, you know? >> Yeah. AI is just infused, right? I mean, it's... I would say, human intelligence augmented for data scientists, for everyone, you know. Whether you're using it through notebooks or whether you're using it through applications that we have, or the developer APIs themselves. >> So, what are some of the big initiatives you're working on near-term, mid-term? >> Yeah. So, you mentioned what's really driving innovation, and it's related to the question that you just asked right now, and I really believe developers drive innovation. They're force multipliers in the solutions that they build. And so, that's really the integration point that Esri has with AWS, it's developers. And earlier this year, we released ArcGIS Platform, which is our platform-as-a-service offering that exposes these powerful location services that Jay just explained. There's a set of on-demand services that developers can bring into their applications as they want, and they can bring in one, they can bring in two or three, whatever they need, but they're there when they need them. And also, developers have their client API of choice. So, we have our own client APIs that we offer, but you're not pigeonholed into that when you're working with ArcGIS Platform. A developer can bring their own API. >> Okay, so you call it a platform as a service. Are you making your data available as well?
Your data, your tooling, and then selling that as a service? >> Our data has always been available as a service, I would say. >> Okay, yeah. >> Everything that we do, our GIS tools, are accessible as a web service. >> Vellante: Is that new, or... that's always been the way? >> No, that's always been there. That's always been that way. The difference now is everything is built from the ground up to be cloud native. >> Dave Vellante: Okay. >> From the ground up to be connected to every data set that's available on AWS, every compute that can be exploited, from small to massive in terms of compute, and also reaching out to bring all the apps and the developer experiences, pushing out to customers. >> So, 50 years ago, you weren't obviously using the cloud, so you were running everything on-prem. Now are you all in the cloud, or have you kind of got a mix? What is the clear picture of that? >> So, we have two major offerings. There's ArcGIS Online, where obviously it's offered as a service and it's GIS as a service for everyone. And that's available everywhere. The other offering we have is actually ArcGIS Enterprise, where some customers run it on premises, some run it in the cloud, especially AWS. Many run it on the edge, some in the field, and there's connectivity between these. A lot of our customers are hybrid. So, they make the best of both. Depending on the kinds of data- >> Dave Vellante: You give them a choice. >> the kinds of workflows... Giving them the choice, exactly. And I would say, you know, taking Werner's keynote this morning, he talked about what's the next frontier, right? The next frontier could very well be when AWS gets to space and makes compute available there. It's sitting alongside the data that's captured, and we've always, like I said, for 50 years, worked with satellite imagery, >> Dave Vellante: Yeah. >> or worked with IoT, or worked with drone data. It's just getting GIS closer to where the data is. >> So, the ultimate edge: space.
>> Yes. >> All right, I'll give you guys... Give us a quick wrap if you would. Final thoughts. >> I think it's... Go ahead. >> Go ahead, Dave. >> Yeah. I really resonate with data and content. We're a technology company, there's no doubt about that, but without good data, not only supplied by ourselves, but our customers, Jay mentioned it earlier, our customers bring their own data to our platform, and that's really what drives the analytics and the accuracy in the answers to the problems that people are trying to solve. >> Bring their first-party data with your data and then one plus one is... >> Yes. Yeah, and the key thing about that (Cardella chuckling) is it's not some of the data, it's all of the data that you have. You no longer need to be constrained. >> Yeah, you're not sampling. >> Yes, exactly. >> Yeah. >> All right, guys. Thanks so much. Really interesting story. Congratulations. >> Thank you, Dave. >> Dave, thank you. >> Nice meeting you. >> Thank you for watching. This is Dave Vellante for theCUBE, the leader in global tech coverage. We'll be right back. (upbeat music)
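The query-layer pattern Jay described earlier, caching a heavy analytic query as a snapshot you can refresh, can be emulated in miniature. This sketch uses SQLite rather than Redshift (whose materialized-view syntax differs), and the table and column names are invented:

```python
import sqlite3

# Emulate the "query layer" snapshot pattern: run an analytic query once,
# cache the result as a table, and refresh it on demand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (region TEXT, value REAL)")
conn.executemany("INSERT INTO observations VALUES (?, ?)",
                 [("west", 1.0), ("west", 3.0), ("east", 2.0)])

def refresh_snapshot(conn):
    # Drop and rebuild the cached aggregate, like refreshing a snapshot.
    conn.execute("DROP TABLE IF EXISTS region_totals")
    conn.execute("""CREATE TABLE region_totals AS
                    SELECT region, SUM(value) AS total
                    FROM observations GROUP BY region""")

refresh_snapshot(conn)
print(dict(conn.execute("SELECT region, total FROM region_totals")))
```

The point of the cache is that map rendering and visualization read the small snapshot table instead of re-aggregating the massive base data on every request.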

Published Date : Dec 2 2021



Barak Schoster, Palo Alto Networks | CUBE Conversation 2022


 

>>Hello, everyone. Welcome to this CUBE Conversation. I'm here in Palo Alto, California. I'm John Furrier, host of theCUBE, and we have a great guest here: Barak Schoster, who's in Tel Aviv, senior director and chief architect at Bridgecrew, a part of Palo Alto Networks. He was formerly the co-founder of the company, which was then sold to Palo Alto Networks. Barak, thanks for coming on this CUBE Conversation. >>Thanks John. Great to be here. >>So one of the things I love about open source, and you're seeing a lot more of the trend now, talking about, you know, people doing incubators all over the world, having open source and having builders, people who are starting companies, it's coming more and more, and you're one of them. And you've been part of this open source cloud security, infrastructure as code, going back a while, and you guys had a lot of success. Now, open source infrastructure as code has moved up the stack; certainly there's a lot going on down at the network layer, but developers just want to build security in from day one, right? They don't want to have to get into the waiting game of slowing down their pipelining of code in the CI/CD; they want to move faster. And this has been one of the core conversations this year: how to make developers more productive, and not just as a cliche, but actually more productive, and not have to wait to implement cloud native. Right? So you're in the middle of it, and you're in Tel Aviv. Tell us what you guys are dealing with there. >>Right, yeah. So I hear this need of working fast, having a large velocity of releases, from many of my friends, the SREs, the DevOps, and the security practitioners in different companies. And the thing that we asked ourselves three years ago was how can we simplify the process and make the security teams an enabler instead of a gatekeeper that blocks the releases?
And the thing that we understood we should do is not only runtime scanning of the cloud infrastructure and the cloud native clusters, but also shifting left the finding and fixing, the remediation of security issues, to the level of the code. So we started doing infrastructure as code with Terraform, Kubernetes manifests, CloudFormation, serverless, and the list goes on, and we created an open source product around it named Checkov, which has an amazing community of hundreds of contributors. Not all of them are Palo Alto employees; most of them are community users from various companies. And we tried, and succeeded, to democratize the creation of policy as code: the ability to inspect your infrastructure as code and tell you, hey, this is the best practice that you should use, consider using it before applying a misconfigured S3 bucket into production, or before applying a misconfigured Kubernetes cluster into your production or dev environment. And the goal, >>The goal, >>The goal is to do that from the IDE, from the moment that you write code, and also to inspect your configuration in CI and CD and in runtime, and also understand if there is any drift out there, with the ability to fix that in the source code, in the blueprint itself. >>So what I hear you saying is really two problems you're solving. One is the organizational policies around how things were done in an environment before, the old way. You know, the security teams do a review, you send a ticket, things are waiting, stop, wait, hurry up and wait kind of thing. And then there's the technical piece of it, right? Is that it? There's two pieces to that. >>Yeah, I think that one thing is the change of the methodologies. We understood that we should just work differently than what we used to do. Tickets are slow. They have priorities. You have a bottleneck, which is a small team of security practitioners.
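A policy-as-code check like the one described here, catching a misconfigured S3 bucket in infrastructure code before it is applied, boils down to a predicate over the parsed resources. A minimal sketch, not Checkov's real rule engine, policy IDs, or resource model:

```python
# Toy policy-as-code check over a parsed representation of infrastructure
# code. Resource names and the config layout are invented for illustration.
def check_s3_not_public(resources):
    # Flag every S3 bucket whose ACL makes it world-readable.
    findings = []
    for name, cfg in resources.items():
        if cfg.get("type") == "aws_s3_bucket" and cfg.get("acl") == "public-read":
            findings.append((name, "S3 bucket should not be publicly readable"))
    return findings

resources = {
    "logs": {"type": "aws_s3_bucket", "acl": "private"},
    "assets": {"type": "aws_s3_bucket", "acl": "public-read"},  # misconfigured
}
for name, message in check_s3_not_public(resources):
    print(f"FAILED {name}: {message}")
```

Because the check runs on the code rather than the deployed bucket, the developer who wrote the resource sees the failure in the IDE or the CI job, before anything reaches production.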
And honestly, a lot of the work is repetitive and can be democratized into the engineering teams. They should be able to understand, hey, I wrote the code piece that provisioned this instance, I am the most suitable person as a developer to fix that piece of code and reapply it to the runtime environment. >>And then it also sets the table for automation. It sets the table for policies, things that make things more efficient, scaling. 'Cause you mentioned SREs are a big part of this, too: dev ops and SRE. Those folks are trying to move as fast as possible at scale, a huge scale challenge. How does the scale piece come into here? >>So both teams, SREs and security teams, are aligned to deploying applications, new application releases, into the production environment. And the thing that you can do is you can inspect all kinds of best practices, not only security best practices, but also make sure that you have provisioned concurrency on your serverless functions, or that the amount of auto-scaling groups is what you expect it to be. And you can scan all of those things at the level of your code before applying it to production. >>That's awesome. Good benefits; it scales the security team, it sounds like, as well. You could get that policy out there. So great stuff. I want to really quickly ask you about the event. You're hosting the Code to Cloud Summit. What are we going to see there? I'm going to host a panel, of course, I'm looking forward to that as well. You get a lot of experts coming in there. Why are you having this event and what topics will be covered? >>So we wanted to talk on all of the shift left movement and all of the changes that have happened in the cloud security market since inception till today.
And we brought in great people and great practitioners from both the dev ops side, the chaos engineering side, and the security practitioners, and everybody has their opinion on the current state, how things should be implemented in a mature environment, and what the future might hold for the code and cloud security markets. The thing that we're going to focus on is all of the supply chain: from securing the CI/CD itself, making sure your actions are not vulnerable to a shell injection, or making sure your version control systems are configured correctly with single sign-on, MFA, and branch protection rules, but also open source security like SCA (software composition analysis), infrastructure as code security, and obviously runtime security, drifts, and Kubernetes security. So we're going to talk on all of those different aspects and how each and every team is mitigating the different risks that come with them. >>You know, one of the things that comes up when I hear you talking is the range of infrastructure as code. How has infrastructure as code changed? 'Cause, you know, there's dev ops and SREs, now application developers; you still have to have programmable infrastructure. I mean, if infrastructure as code is really realized up and down the stack, all aspects need to be programmable, which means you got to have the data, you got to have the ability to automate. How would you summarize the state of infrastructure as code? >>So a few years ago, we started with physical servers, where we carried the infrastructure on our backs. I mounted them on the rack myself a few years ago and connected all of the different cables. Then came the revolution of VMs. We didn't do that anymore. We had one beefy appliance and we had 60 virtual servers running on one appliance, so we didn't have to carry new servers every time into the data center. Then came the cloud, which made everything API-first.
And APIs enabled us to write bash scripts to provision those resources. But it was not enough, because we wanted to have a reproducible environment, written either in a declarative language like Terraform or CloudFormation, or an imperative one like CDK or Pulumi, but having a consistent way to deploy your application to multiple environments. And the stage after that is having some kind of a service catalog that will allow an application developer to get the new releases up and running. >>And the way that it has evolved, mass adoption of infrastructure as code is already happening. But that introduces the ability for velocity in deployment, but also new kinds of risks that we haven't thought about before as security practitioners. For example, you should vet all of the open source Terraform modules that you're using, because you might have a leakage. Terraform has a lot of access to secrets in your environment, and the state really contains sensitive objects like passwords. The other thing that has changed is that today we rely a lot on cloud infrastructure, and in the past year we've seen the Log4Shell attack, for example, and also cloud providers have disclosed that they were vulnerable to the Log4Shell attack. So we understand today that when we talk about cloud security, it's not only about the infrastructure itself; it's also about whether the infrastructure that we're using is using an open source package that is vulnerable, whether we are using an open source package that is vulnerable, whether our development pipeline is configured correctly, and the list goes on. So it's really a new approach of analyzing the entire software bill of materials, also called SBOM, and understanding the different risks there. >>You know, I think this is a really great point and great insight, because new solutions for new problems are new opportunities, right? So open source growth has been phenomenal.
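The reproducibility of declarative infrastructure as code comes from a tool diffing desired state (the code) against current state and applying only the difference. A toy version of that plan step, with invented resource names and a simplified state model:

```python
# Sketch of a declarative "plan": compare desired state with current state
# and report only what must change. Resource names are made up.
def plan(desired, current):
    create = {k: v for k, v in desired.items() if k not in current}
    update = {k: v for k, v in desired.items()
              if k in current and current[k] != v}
    delete = [k for k in current if k not in desired]
    return create, update, delete

desired = {"vpc": {"cidr": "10.0.0.0/16"}, "db": {"size": "large"}}
current = {"db": {"size": "small"}, "old_queue": {"depth": 10}}
create, update, delete = plan(desired, current)
print(create, update, delete)
```

Running the same code against the same environment twice yields an empty plan, which is exactly the reproducibility property imperative bash scripts lacked.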
And you mentioned some of those: Terraform, and one of the projects you started, Checkov. They're all good, but there's some holes in there, and it's open source, it's free, everyone's building on it. And that's what it's for. And I think now open source goes to the next level again, another generational inflection point: there's more contributors, there's companies involved, people are using it more. It becomes a really strong integration opportunity. So it's all free, and it's how you use it. So this is a new kind of extension of how open source is used. And if you factor in some of the things like threat vectors, you have to know the code. >>So there's no way to know it all. So you guys are scanning it, doing things, but it's also a huge system. It's not just one piece of code. You're talking about cloud becoming an operating system. It's a distributed computing environment, so it's a whole new area of problem space to solve. So I love that. Love that piece. Where are you guys at on this now? How do you feel in terms of where you are in the progress bar of the solution? Because the supply chain is usually a hardware concept people can relate to, but when you bring in software, how you source software is like sourcing a chip or a piece of hardware: you got to watch where it came from, and you gotta track that, or scan it and validate it, right? So these are new, new things. Where are we with this? >>So you're right, we have a lot of moving parts. And really, the supply chain term came from the automobile industry. You have a car, you have an engine; the engine might be created by a different vendor. You have the wheels; they might be created by a different vendor. So when you buy your next Chevy or Ford, you might have wheels from Continental or another vendor. And actually, software is very similar.
When we build software, we host it on a cloud provider like AWS, GCP, or Azure, not on our own infrastructure anymore. And when we're building software, we're using open-source packages that are maintained on the other half of the world, and we don't always know, in person, the people who've created that piece. And we do not have a vetting process, even a human vetting process, on whether everything that we've created was really made by us or by a trusted source. >>And this is where we come in. We help empower you, the engineer, with tools to analyze all of the dependency tree of your software bill of materials. We will scan your infrastructure code and your application packages that you're using from package managers like npm or PyPI, and we scan those open source dependencies. We verify that your CI/CD is secure, that your version control system is secure, and the thing that we will always focus on is making a fix accessible to you. So let's say that you're using a misconfigured bucket: we have a bot that will fix the code for you. And let's say that you have a vulnerable open-source package and it was fixed in a later version: we will bump the version for you to make your code secure. And we will also run the same process on your runtime environment. So we will understand whether your environment is secure from code to cloud, or if there are any threats out there that your engineering team should look at. >>That's a great service. And I think this is cutting edge from a technology perspective. What are some of the new cloud native technologies that you see emerging fast, getting traction and ultimately having product-market fit in this area? Because you mentioned Kubernetes; that's one of the areas that has a lot more work to do, or is being worked on now, that customers are paying attention to. >>Yeah, so definitely Kubernetes started in growth companies and now it's in every Fortune 100 company.
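Scanning a dependency list against advisories, as described here, can be reduced to a version comparison per package. A toy sketch; the advisory table and fixed-in versions are illustrative stand-ins, not real CVE data or Bridgecrew's scanner:

```python
# Toy SBOM scan: flag packages whose pinned version is older than the
# fixed-in version from a (made-up) advisory table.
def parse(v):
    # Turn "2.14.1" into (2, 14, 1) so versions compare numerically.
    return tuple(int(x) for x in v.split("."))

ADVISORIES = {"log4j-core": "2.17.1", "libexample": "1.4.0"}

def scan_sbom(deps):
    # deps is a list of (package, pinned_version) pairs.
    return [name for name, version in deps
            if name in ADVISORIES and parse(version) < parse(ADVISORIES[name])]

deps = [("log4j-core", "2.14.1"), ("requests", "2.26.0"), ("libexample", "1.4.0")]
print(scan_sbom(deps))
```

Real version schemes (pre-releases, epochs, version ranges) need a proper comparator, but the walk-the-tree-and-compare shape is the core of the check.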
So you can find it in any large-scale organization, and serverless functions are also getting a higher adoption rate. But I think the thing that we're seeing the most massive adoption of is actually infrastructure as code. During COVID, a lot of organizations went through a digital transformation, and in that process, they started to work remotely and agreed on migrating to a new infrastructure: not the data center, but the cloud provider. So teams that were not experienced with those clouds are now getting familiar with them and getting exposed to new capabilities, and with that, also new risks. >>Well, great stuff. Great to chat with you. I want to ask you while you're here: you mentioned that with infrastructure as code, for the folks that get it right, there are some significant benefits. If we don't get it right, we know what that looks like. What are some of the benefits that you can share, personally or for the folks watching out there, if you get infrastructure as code right? What does the future look like? What does success look like? What's that path look like when you get it right, versus not doing it or getting it wrong? >>I think that every engineer's dream is to be impactful, to work fast and learn new things, and not to get a PagerDuty alert on a Friday night. So if you get infrastructure as code right, you have a process where everything is declarative and is peer reviewed, both by you and by automated frameworks like Bridgecrew and Checkov. And also you have the ability to understand that, hey, once I write it, from that point forward it's reproducible, and it also has a state. So only changes will be applied, and it will enable myself and my team to work faster and collaborate in a better way on the cloud infrastructure. Let's say that you're not doing infrastructure as code: you have one resource changed by one team member and another resource changed by another team member.
And the different dependencies between those resources get fragmented and broken. You cannot change your database without your application being aware of that; you cannot change your load balancer without the application being aware of that. So infrastructure as code enables you to make those changes in a mature fashion that will cause fewer outages. >> Yeah, a lot of people get PagerDuty alerts on Friday, Saturday, and Sunday. Old way or new way, you don't want to break up your Friday night after a nice dinner, right, Barak? Well, thanks for coming on all the way from Tel Aviv, really appreciate it. I wish you guys everything the best over there, and we will see you at the event that's coming up. We're looking forward to the Code to Cloud Summit and all the great insight you guys will have. Thanks for coming on and sharing the story. Looking forward to talking more with you, Barak; thanks for all the insight on security, infrastructure as code, and all the cool things you're doing at Bridgecrew. >> Thank you, John. >> Okay, this is a Cube Conversation here at Palo Alto, California. I'm John Furrier, host of theCUBE. Thanks for watching.
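The declarative, stateful workflow described in this interview (write the configuration once, keep a state, apply only the changes) can be sketched as a plan/diff step. This is a toy model of what tools like Terraform do, not any real tool's implementation; the resource names are made up:

```python
# Toy model of a declarative "plan" step: diff desired config
# against recorded state so that only changes get applied.
def plan(state, desired):
    """Return the minimal actions that move `state` to `desired`."""
    actions = []
    for name, cfg in desired.items():
        if name not in state:
            actions.append(("create", name))
        elif state[name] != cfg:
            actions.append(("update", name))
    for name in state:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

state = {"db": {"size": "small"}, "cache": {"size": "small"}}
desired = {"db": {"size": "large"}, "queue": {"size": "small"}}
print(plan(state, desired))
# [('update', 'db'), ('create', 'queue'), ('destroy', 'cache')]
```

Because the plan is computed against shared state, two team members can change different resources without clobbering each other: only the actual deltas are applied.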

Published Date : Mar 18 2022

Toni Manzano, Aizon | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences


 

(up-tempo music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. Today we'll be speaking with Aizon as part of our life sciences track, and I'm pleased to welcome the co-founder as well as the chief science officer of Aizon, Toni Manzano. We'll be discussing how artificial intelligence is driving key processes in pharma manufacturing. Welcome to the show. Thanks so much for being with us today. >> Thank you, Natalie, to you and to your introduction. >> Yeah. Well, as you know, Industry 4.0 is revolutionizing manufacturing across many industries. Let's talk about how it's impacting biotech and pharma, as well as Aizon's contributions to this revolution. >> Well, actually, Pharma 4.0 is introducing a totally new concept of how to manage processes. Nowadays, the industry considers that everything is practically static, that nothing changes, and this is because they haven't had the ability to manage the complexity and the variability around biotech and drug manufacturing processes. Today, with technologies like cloud computing, IoT, and AI, we can get all those data, we can understand the data, and we can interact in real time with processes. This is how things are going nowadays. >> Fascinating. Well, as you know, COVID-19 really threw a wrench into a lot of activity in the world, our economies, and also people's way of life. How did it impact manufacturing in terms of scale-up and scale-out? And what are your observations from this year? >> You know, the main problem when you want to do a scale-up process is not only the equipment; it is also the knowledge that you have around your process. When you're doing a vaccine on a small scale, you control only a few parameters in your lab, and they have to be escalated when you go from five liters to 2,500 liters. How do you manage this difference of scale?
Well, AI is helping nowadays to detect and identify the most relevant factors involved in the process, the critical relationships between the variables, and the final control of the full process, following continued process verification. This is how we can help nowadays, using AI and cloud technologies to accelerate and scale up vaccines like the COVID-19 ones. >> And how do you anticipate pharma manufacturing will change in a post-COVID world? >> This is a very good question. Nowadays, we have some assumptions that we are still trying to overcome with human effort. With the new situation, with the pandemic that we are living through, the next evolution is that humans will take care of the good practices and of the new knowledge that we have to generate. AI will manage the repetitive tasks, all the routine human activity that we are doing now; that will be done by AI, and humans will never again do repetitive tasks in this way. They will manage complex problems and supervise AI output. >> So you're driving more efficiencies in the manufacturing process with AI. You recently presented at the United Nations Industrial Development Organization about the challenges brought by COVID-19 and how AI is helping with the equitable distribution of vaccines and therapies. What are some of the ways that companies like Aizon can now help with that kind of response? >> Very good point. Could you imagine? You're a big company, a top pharma company, and you hold the intellectual property of a COVID-19 vaccine based on the mRNA principle, and you would like to expand this vaccination effort: not only to deliver vaccination, but also to manufacture the vaccine. What if you try to manufacture these vaccines in South Africa, or in Asia, in India? The secret is to transport not only the raw material, not only the equipment, but also the knowledge.
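One simple way to surface "the most relevant factors" in a process, as described above, is to rank candidate variables by how strongly they correlate with the quality outcome. Below is a toy Pearson-correlation ranking in plain Python; the variable names and batch data are invented, and real process models would go well beyond linear correlation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_factors(factors, outcome):
    """Rank process variables by |correlation| with the outcome."""
    scores = {name: abs(pearson(xs, outcome)) for name, xs in factors.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented batch data: temperature tracks yield, stir speed does not.
factors = {
    "temperature": [30, 31, 33, 36, 40],
    "stir_speed": [100, 90, 105, 95, 100],
}
yield_pct = [71, 72, 75, 78, 83]
print(rank_factors(factors, yield_pct))  # ['temperature', 'stir_speed']
```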
How to operate, how to control the full process from the initial phase till the packaging and the vial filling: this is how we are contributing. AI is packaging all this knowledge in AI models. This is the secret. >> Interesting. Well, what are the benefits for pharma manufacturers when considering the implementation of AI and cloud technologies? And how can they progress in their digital transformation by utilizing them? >> One of the benefits is that you are able to manage the variability, the real complexity, in the world. You cannot create processes to manufacture drugs just considering that the raw material you're using never changes. You cannot consider that all the equipment works in the same way. You cannot consider that your recipe will work the same way in Brazil as in Singapore. So the complexity and the variability must be understood as part of the process. This is one of the benefits. The second benefit is that when you use cloud technologies, you don't have to worry much about computing licenses, software updates, antivirus, or scaling up your computing; everything is done in the cloud. So, well, these are two main benefits. There are more, but these are maybe the two main ones. >> Yeah. Well, that's really interesting how you highlight the big shift in how you handle this in different parts of the world. So, what role do compliance and regulation play here? And of course we see differences in the way that's handled around the world as well. >> Well, I think this is the first time in, let me say, the pharma experience that we have a very strong commitment from the regulatory bodies, you know, to push forward using these kinds of technologies. Actually, for example, the FDA are using cloud to manage their own systems. So why not use it in pharma? >> Yeah. Well, how do AWS and Aizon help manufacturers address these kinds of considerations?
>> Well, we have a very great partner. AWS, for us, simplifies our life a lot. We are a, let me say, different startup company, Aizon, because we have a lot of PhDs in the company. So we are not the classical geeky company with guys programming and developing all day; we have a lot of science inside the company. This is our value. So for everything that is provided by Amazon, why would we have to recreate it again? We can rely on SageMaker, we can rely on Cognito, we can rely on Lambda, we can rely on S3 to have encrypted data with automatic backup. AWS simplifies our life a lot, and we can dedicate all our knowledge and all our efforts to the things that we know: pharma compliance. >> And how do you anticipate that pharma manufacturing will change further in the 2021 year? >> Well, we are participating not only with business cases; we also participate with the community, because we are leading an international project to anticipate these kinds of new breakthroughs. So we are working with, let me say, initiatives in the - association; we are collaborating in two different projects to apply AI in computerized system validation, in order to create a more robust process for the mRNA vaccine. We are collaborating with the - university, creating the standards for AI application in GxP. We are collaborating on different initiatives with the pharma community to create the foundation to move forward during this year. >> And how do you see the competitive landscape? What do you think Aizon provides compared to its competitors? >> Well, good question. Probably you can find a lot of AI services, platforms, and programs, software that can run in the industrial environment. But I think it will be very difficult to find a full GxP-compliant platform working on cloud with AI, where the AI is already qualified. I think that no one is doing that nowadays.
And one demonstration of that is that we are also writing scientific papers describing how to do it. So you will see that Aizon is the only company doing that nowadays. >> Yeah. And how do you anticipate, or, excuse me, how do you see Aizon providing a defining contribution to the future of cloud scale? >> Well, there are no limits in the cloud. As far as you accept that everything is varied and complex, you will need computing power, and the only way to manage this complexity is to run a lot of powerful computation. So cloud is the only system, let me say, that allows that. Well, the thing is that, you know, pharma will also have to be compliant with the cloud providers. And for that, we created a new layer around the platform that we call qualification as a service. We are creating this layer in order to continuously qualify any kind of cloud platform that wants to work in this environment. This is how we are doing that. >> And in what areas are you looking to improve? How are you constantly trying to develop the product and bring it to the next level? >> We always have the patient in mind. Aizon is a patient-centric company. Everything that we do is to improve processes in order, at the end, to deliver the right medicine at the right time to the right patient. So this is how we are focusing all our efforts: to bring this opportunity to everyone around the world. For this reason, for example, we want to work with this project where we are delivering value to create vaccines for COVID-19, for example, everywhere, just packaging the knowledge using AI. This is how we envision it and how we are acting. >> Yeah. Well, you mentioned the importance of science and compliance. What do you think are the key themes that form the foundation of your company? >> The first thing is that we enjoy the tasks that we are doing. This is the first thing.
The other thing is that we are learning every day with our customers and from real topics, so we are serving the patients. Everything that we do is about enjoying science: enjoying how to achieve new breakthroughs in order to improve life in the factory, knowing that at the end it will be delivered to the final patient. So: enjoying making science and creating breakthroughs; being innovative. >> Right. And do you think, in a sense, that we were lucky, in light of COVID, that we already had these kinds of technologies moving in this direction for some time, and that we were somehow able to mitigate the tragedy and the disaster of this situation because of these technologies? >> Sure. We are lucky because of this technology, because we are breaking the distance, the physical distance, and we are putting together people, which was so difficult to do before, in all the different aspects. So nowadays we are able to be closer to the patients, to the people, to the customer, thanks to these technologies. Yes. >> So now that we're moving, hopefully, out of this kind of COVID reality, what's next for Aizon? Do you see more collaboration? You know, what's next for the company? >> The next step for the company is to deliver AI models that can be encapsulated in the drug manufacturing, for vaccines, for example. And they will be delivered with the full process: not only materials, equipment, personnel, and recipes; the AI models will also go together as part of the recipe. >> Right. Well, we'd love to hear more about your partnership with AWS. How did you get involved with them? And why them, and not another partner? >> Well, let me explain to you a secret. Seven years ago, we started with another top cloud provider, but we saw very soon that this other cloud provider was not well aligned with the GxP requirements. For this reason, we met with AWS. We went together to seminars and conferences with top pharma communities and pharma organizations.
We went there to give speeches and talks, and we felt that we fit very well together, because AWS has a GxP white paper describing very well how to rely on AWS components, one by one. For us, this is a very good credential when we go to our customers. Do you know that when customers are acquiring and establishing the Aizon platform in their systems, they are auditing us? They are auditing Aizon. And we, in turn, have to audit AWS, because this is the normal chain of pharma suppliers. That means we need this documentation; we need all this transparency between AWS and our partners. This is the main reason. >> Well, this has been a really fascinating conversation, hearing how AI and cloud are revolutionizing pharma manufacturing at such a critical time for society all over the world. Really appreciate your insights. Toni Manzano is the chief science officer and co-founder of Aizon. I'm your host, Natalie Erlich, for theCUBE's presentation of the AWS Startup Showcase. Thanks very much for watching. (soft upbeat music)

Published Date : Jun 24 2021

Tech Titans and the Confluence of the Data Cloud


 

>> With me are three amazing guest panelists. One of the things we'll explore: what can we do today with data that we weren't able to do maybe five years ago? >> Yes, certainly. Um, there are lots of specific examples I could point to, but if you were to zoom out and look at the big picture, our ability to reason through data, to inform our choices and actions with data, is bigger than ever before. There are still many companies that have to decide to sample data or to throw away older data, or that don't have the right data from external companies to put their decisions and actions in context. Now we have the technology and the platforms to bring all that data together, tear down silos, and look at a 360-degree view of a customer or an entire action. So I think it's reasoning through data that has increased the capability of organizations dramatically in the last few years. >> So, Mai-Lan, when I was a young pup at IDC, I started the storage program there many, many moons ago, and so I always pay attention to what's going on in storage. And S3, people sometimes forget, was actually the very first cloud product announced by AWS, which really ushered in the cloud era; that was 2006, and it fundamentally changed the way we think about storing data. I wonder if you could explain how S3 specifically, and object storage generally, you know, with GET and PUT, really transformed storage from a blocker to an enabler of some of these new workloads that we're seeing. >> Absolutely. I think it has been transformational for many companies in every industry, and the reason for that is because in S3 you can consolidate all the different data sets that today are scattered around so many companies' data centers. If you think about it, S3 gives you the ability to put in unstructured data, which is video recordings and images, and semi-structured data, which is your CSV files, which every company has lots of.
And it also has support for structured data types like Parquet files, which drive a lot of the business decisions that every company has to make today. And so if you think about S3, which launched on Pi Day in March of 2006: S3 started off as an object store, but it has evolved into so much more than that, where companies all over the world, in every industry, are taking those different data sets, putting them in S3, growing their data, and then growing the value that they capture on top of that data. And that is the separation that Snowflake talks about, and that many of the pioneers across different industries talk about: the separation of the growth of storage from the growth of your compute applications. What's happening is that when you have a place to put your data like S3, which is secure by default and has the availability and durability of an operational profile you know and can trust, then the innovation of the application developers really takes over. One example of that is a customer we have in the financial sector, who started to use S3 to store their customer care recordings. They were just using it for storage, because that data set obviously grows very quickly, and then somebody in their fraud department got the idea of doing machine learning on top of those customer care recordings. And when they did that, they found really interesting data that they could then feed into their fraud detection models. So you get this kind of alchemy of innovation that happens when you take the data sets of today and yesterday and tomorrow and you put them all in one place, which is S3, and the innovation of your application developers just takes over and builds not just what you need today, but what you need in the future as well. >> Thank you for that. Mark, I want to bring you into this panel. It's great to have you here, so thank you.
I mean, Tableau has been a game changer for organizations. I remember my first Tableau conference: passionate customers, and really bringing cloud-like agility and simplicity to visualization. It totally changed the way people thought about data, amid massive data volumes, and it simplified access. And now we're seeing new workloads developing on top of data, and Snowflake data in the cloud. Can you talk about how your customers are really telling stories, and bringing those stories to life with data, on top of things like S3, which Mai-Lan was just talking about? >> Yeah, for sure. Building on what Christian and Mai-Lan have already said: our mission at Tableau has always been to help people see and understand data. And you look at the amazing advances that are happening in storage and data processing, and now the data that you can see and play with is so amazing, right? At this point in time, it's really nothing short of a new microscope or a new telescope that lets you understand patterns that were always there in the world; you literally couldn't see them before because of the limitations on the amount of data you could bring into the picture, the amount of processing power, and the amount of sharing of data you could bring into the picture. And now, like you said, these three things are coming together: this amazing ability to see and tell stories with your data, combined with the fact that you've got so much more data at your fingertips, and the fact that you can now process that data, look at that data, and share that data in ways that were never possible. Again, I'll go back to that analogy: it feels like the invention of a new microscope, a new telescope, a new way to look at the world and tell stories and get to insights that were just never possible before.
And Christian, I want to come back to this notion of the data cloud, and, you know, it's a very powerful concept, and of course it's good marketing. But But I wonder if you could add some additional color for the audience. I mean, what more can you tell us about the data cloud, how you're seeing it, it evolving and maybe building on some of the things that Mark was just talking about just in terms of bringing this vision into reality? >>Certainly. Yeah, Data Cloud, for sure, is bigger and more concrete than than just the marketing value of it. The big insight behind our vision for the data cloud is that just a technology capability, just a cloud data platform is not what gets organizations to be able to be, uh, data driven to be ableto make great use of data or be um, highly capable in terms of data ability. Uh, the other element beyond technology is the access and availability off Data toe put their own data in context or enrich, based on the no literal data from other third parties. So the data cloud the way to think about it is is a combination of both technology, which for snowflake is our cloud data platform and all. The work loves the ability to do data warehousing, enquiries and speeds and feeds fit in there and data engineering, etcetera. But it's also how do we make it easier for our customers to have access to the data they need? Or they could benefit to improve the decisions for for their own organizations? Think of the analogy off a set top box. I can give you a great, technically set top box, but if there's no content on the other side, it makes it difficult for you to get value out of it. That's how we should all be thinking about the data cloud. It's technology, but it's also seamless access to data >>in my life. Can >>you give us >>a sense of the scope And what kind of scale are you seeing with snowflake on on AWS? >>Well, Snowflake has always driven as Christian. That was a very high transaction rate, the S three. 
And in fact, when Christian and I were talking just yesterday, we were talking about some of the things that have really been remarkable about the long partnership we've had over the years. So I'll give you an example of how that evolution has really worked. As you know, S3 is, you know, the first AWS service launched, and we have customers who have petabytes, hundreds of petabytes, and exabytes of storage in S3. And so, from the ground up, S3 has been built for scale. So when we have customers like Snowflake that have very high transaction rates for requests to S3 storage, we put our customer hat on and we ask customers like Snowflake: how do you think about performance? Not just what performance do you need, but how do you think about performance? And you know, when Christian's team walked us through the demands of making requests to their S3 data, they were talking about some pretty high spikes over time, and just a lot of volume. And so when we built improvements into our performance over time, we put that hat on for the work. Snowflake was telling us what they needed, and then we built our performance model not around a bucket or an account; we built it around a request rate per prefix, because that's what Snowflake and other customers told us they needed. So when you think about how we scale our performance, we scale it based on a prefix and not a bucket or an account, which is what other cloud providers do. We do it in this unique way because 90% of our customer roadmap across AWS comes from customer requests, and that's what Snowflake and other customers were saying: hey, I think about my performance based on the prefix of an object, and not some, you know, arbitrary semantic of how I happened to organize my buckets. I think the other thing I would also throw out there for scale is that, as you might imagine, S3 is a very large distributed system.
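The per-prefix performance model described above can be sketched as follows: aggregate request throughput grows with the number of distinct key prefixes your objects are spread across. The rate used here reflects S3's documented baseline of roughly 5,500 GET requests per second per prefix; treat the exact number as an assumption in this toy model:

```python
# Toy model of S3's per-prefix scaling: each distinct key prefix
# supports a baseline request rate, so spreading keys across
# prefixes multiplies aggregate throughput.
GET_RPS_PER_PREFIX = 5500  # documented S3 baseline; an assumption here

def prefix_of(key):
    """The prefix is everything up to the last '/' in the key."""
    return key.rsplit("/", 1)[0] if "/" in key else ""

def aggregate_get_rps(keys):
    """Estimate peak GET throughput across all distinct prefixes."""
    prefixes = {prefix_of(k) for k in keys}
    return len(prefixes) * GET_RPS_PER_PREFIX

keys = [
    "logs/2020/part-0001.parquet",
    "logs/2020/part-0002.parquet",
    "logs/2021/part-0001.parquet",
    "images/cam1/frame-9.jpg",
]
print(aggregate_get_rps(keys))  # 3 distinct prefixes -> 16500
```

This is why a query engine that fans its data out across many prefixes, rather than piling everything under one, sees its request rate scale horizontally.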
And again, if I go back to how we architected for our performance improvements: we architected in such a way that a customer like Snowflake could come in and take advantage of horizontal scaling. They can do parallel data retrievals, puts and gets for their data, and when they do that they can get tens of thousands of requests per second, because they're taking advantage of the scale of S3. So when we think about scale, it's not just scale as the growth of your storage, which every customer needs; IDC says that digital data is growing at 40% year over year, so every customer needs a place to put all of those growing data sets. The way we have also worked together for many years is this: how can we think about how Snowflake and other customers are driving these patterns of access on top of the data, not just the elasticity of the storage but the access, and then how can we architect, often in very unique ways, as I talked about with our request rate, such that they can achieve what they need to do, not just today but in the future? >> You know, you three companies here don't often take your customer hats off. Mark, I wonder if I could come to you. During the Data Cloud Summit, we've been exploring this notion that innovation in technology has really evolved from point products, you know, the next generation of server or software tool, to platforms that made infrastructure simpler, or cloud functions, and now it's evolving into leveraging ecosystems: you know, the power of many versus the resources of one. So my question is, how are you all collaborating and creating innovations that your customers can leverage?
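The horizontally scaled, parallel retrieval pattern described just above can be sketched with a thread pool issuing byte-range requests. The fetch function here is simulated against an in-memory blob; a real client would issue an HTTP range GET per chunk:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel ranged GETs against object storage. The fetch
# is simulated; a real client (e.g. boto3) would make one HTTP
# range request per chunk.
CHUNK = 4  # bytes per ranged request (tiny, for illustration)

def fetch_range(blob, start, end):
    """Simulate 'GET bytes=start-end' on a stored object."""
    return blob[start:end]

def parallel_get(blob, workers=4):
    """Retrieve an object as parallel byte-range requests."""
    ranges = [(i, min(i + CHUNK, len(blob))) for i in range(0, len(blob), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: fetch_range(blob, *r), ranges)
    return b"".join(parts)

data = b"columnar-data-0123456789"
assert parallel_get(data) == data
print(len(data), "bytes retrieved in", -(-len(data) // CHUNK), "ranged requests")
```

Because each range is an independent request, adding workers (and spreading keys across prefixes) is what turns a single object store into the "tens of thousands of requests per second" pattern described in the interview.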
You know, combine that that processing power on data and the ability to visualize it was obvious as you talk about the larger ecosystem. Now, of course, tableau is part of salesforce. Um and so there's a much more interesting story now to be told across the three companies. 1, 2.5, maybe a zoo. We talk about tableau and salesforce combined together of really having this full circle of salesforce. You know, with this amazing set of business APS that so much value for customers and getting the data that comes out of their salesforce applications, putting it into snowflakes so that you can combine that share, that you process it, combine it with data not just for across salesforce, but from your other APS in the way that you want and then put tableau on top of it. Now you're talking about this amazing platform ecosystem of data, you know, coming from your most valuable business applications in the world with the most, you know, sales opportunity, objects, marketing service, all of that information flowing into this flexible data platform, and then this amazing visualization platform on top of it. And there's really no end of the things that our customers can do with that combination. >>Christian, we're out of time. But I wonder if you could bring us home and I want to end with, you know, let's say, you know, people. Some people here, maybe they don't Maybe they're still struggling with cumbersome nature of let's say they're on Prem data warehouses. You know the kids just unplug them because they rely on them for certain things, like reporting. But But let's say they want to raise the bar on their data and analytics. What would you advise for the next step? For them? >>I think the first part or first step to take is around. Embrace the cloud and they promise and the abilities of cloud technology. 
There are many studies showing that, relative to peers, companies that embrace data are coming out ahead and outperforming. With traditional on-prem technology, you ended up with a proliferation of silos and copies of data, and a lot of energy went into managing those on-prem systems, making copies, data governance and security. Cloud technology, and the type of platform that Snowflake has brought to market, enables organizations to focus on the data, the data model, data insights, and not necessarily on managing the infrastructure. So I think the first recommendation from our end is: embrace cloud, get onto a modern cloud data platform, make sure you're spending your time on data, not managing infrastructure, and see what the infrastructure lets you do. >> Okay, this is Dave Vellante for theCUBE. Thank you for watching. Keep it right there, with more great content coming your way.
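As an aside, the horizontally scaled access pattern described in this interview, issuing many parallel retrievals instead of one serial stream, can be sketched in a few lines. This is a minimal, hedged illustration, not Snowflake's or AWS's actual client code: the keys and the fetch stub are invented, and in a real client the worker would issue S3 GET requests through an SDK rather than return a placeholder string.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(key):
    # Stand-in for a real S3 GET (e.g. an SDK get_object call).
    # Spreading keys across distinct prefixes is the usual way clients
    # let the object store scale request rates horizontally.
    return f"data-for-{key}"

def parallel_get(keys, workers=32):
    # Issue all retrievals concurrently instead of one at a time.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, keys))

# Hypothetical keys distributed across four prefixes.
keys = [f"prefix-{i % 4}/object-{i}" for i in range(8)]
results = parallel_get(keys)
```

Aggregate throughput then grows with the number of concurrent workers and prefixes, which is the "tens of thousands of requests per second" effect described in the conversation.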

Published Date : Nov 20 2020



Matt Kixmoeller, Pure Storage & Michael Ferranti, Portworx | Kubecon + CloudNativeCon NA 2020


 

>> Narrator: From around the globe, it's theCUBE. With coverage of KubeCon and CloudNativeCon North America 2020, virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi, I'm Joep Piscaer. Welcome to theCUBE's coverage of KubeCon, CloudNativeCon 2020. I'm joined today by Matt Kixmoeller, he's VP of strategy at Pure Storage, as well as Michael Ferranti, he's the senior director of product marketing at Portworx, now acquired by Pure Storage. Fellows, welcome to the show. >> Thanks for having us. >> I want to start out with, you know, the lay of the land of storage in the Cloud Native space, in the Kubernetes space. You know, what's hot? What's happening? What are the trends that you see going on? Matt, if you could shed some light on that for me? >> Yeah, I think, you know, from a Pure point of view, obviously we've watched customers maturing their Kubernetes deployments and particularly leaning towards persistent, you know, applications, and so, you know, we noticed within our customer base that there was quite a lot of deployment of Portworx on Pure Storage. And that inspired us to start talking to one another, you know, almost six-plus months ago, which eventually ended in us bringing the two companies together. So it's been a great journey from the Pure point of view, bringing Portworx into the Pure family. And, you know, we're working through now, with our joint customers, integration strategies and how to really broaden the use of the technology. So that's quite exciting times for us. >> And of course, it's good to hear that the match goes beyond just the marketing color, like the brand color. >> Absolutely. Yeah. I mean, the fact that both companies were orange and, you know, their logo looked like kind of a folded up version of ours, just started things off on the right foot. >> A match made in heaven, right?
So I want to talk a little bit about, you know, the acquisition, what's happened there, and especially, you know, looking at Portworx as a company and as a product set; it's fairly popular in the cloud community, a lot of traction with customers. So I want to zoom in on the acquisition itself and kind of the roadmap going forward, merging the two companies and adding Portworx to that Pure portfolio. Matt, if you could shed some light on that as well. >> Yeah. Why don't I start and then Michael can jump in as well? So, you know, we at Pure had been really working for years now to outfit our all-flash storage arrays for the container use case, and shipped a piece of software that we call PSO. That was really a super CSI driver that allowed us to do intelligent placement of, you know, persistent volumes on Pure arrays. But the more time we spent in the market, the more we just started to engage with customers and realized that there were a whole number of use cases that didn't really want a hardware-based solution, you know. They either wanted to run completely in the cloud, hybrid between on-prem and cloud, or leverage bare metal hardware. And so, you know, we came to the conclusion that, you know, first off, although positioning arrays for the market was the right thing to do, we wouldn't really be able to serve the broader needs of storage for containers if we did that. And then, you know, the second thing I think was that we heard from customers that they wanted a much richer data management stack. You know, it's not just about providing the persistent volume for the container, but, you know, all the capabilities around snapshotting and replication and migration and mobility between on-prem and cloud were necessary. And so, you know, Portworx brought to bear not only a software-based solution into our portfolio, but really that full data management stack platform in addition to just storage.
And so as we look to integrate our product lines, you know, we're looking to deliver a consistent experience for data management for Kubernetes on whatever infrastructure the customer would like, whether they want to run on all-flash arrays, white box servers, bare metal, VMs or on cloud storage as well. You know, all of that can have a consistent experience with the Portworx platform. >> Yeah, and because, you know, data management, especially in this world of containers, is, you know, a little more difficult; it's definitely more fragmented across, you know, multiple clouds, multiple cloud vendors, multiple cloud services, multiple instances of a service. So the fragmentation has, you know, given IT departments quite the headache in operationally managing all that. So Michael, you know, what's kind of the use case for Portworx in this fragmented cloud storage space? >> Yeah. It's a great question. You know, the use cases are many and varied. You know, to put it in a little bit of historical perspective, you know, I've been attending KubeCons, either (indistinct), for about five or six years now, kind of losing count. And we really started seeing Kubernetes as kind of an agile way to run CI/CD environments and other test dev environments. And there were just a handful of customers that were really running production workloads at the very, very beginning. If you fast forward to today, Kubernetes is being used to tackle some of the biggest central, board-level problems that enterprises face, because they need that scale and they need that agility. So, you know, COVID's accelerated that. So we see customers, say in the retail space, who are having to cope with a massive increase in traffic on their website, people searching for, kind of, you know, the products that they can't find anywhere else. Are they available? Can I buy them online?
And so they're re-architecting those web services to use, often, open source databases, in this case Elasticsearch, in order to create great user experiences. And they're managing that across clouds and across environments using Kubernetes. Another customer, I would say kind of a very different use case but also one that matches that scale, would be Esri, which has unfortunately become a household name given the circumstances; a lot of the COVID tracking runs on their ArcGIS system, keeping track of tracing and outbreaks. They're running that service in the cloud using Portworx. And again, it's all about how do we reliably and agilely deploy applications that are always available and create that experience that our customers need. And so we see, kind of, you know, financial services doing similar things, healthcare, pharmaceutical doing similar things. Again, the theme is it's the biggest business problems that we're solving now, not just the kind of low hanging fruit, as we used to talk about. >> Yeah exactly. Because, you know, storage, a lot of the time, is kind of boilerplate functionality, you know; it's there, it works. And if it doesn't, you know, the problem with storage in a cloud native space is that fragmentation, right? On the one hand you have that enormous, you know, scale; on the other hand, the tons of different services that can hold data, that need protecting, as well as data management. So I want to zoom in on a recent development in the Portworx portfolio, where the PX backup product has spun out as its own little product. You know, what's the strategy there, Michael? >> Yeah, so I think, you know, fundamentally data protection needs to change in a Kubernetes context. The way in which we protected applications in the past was very closely related to the way in which we protected servers, because we would run one app per server. So if we protected the server, our application was protected.
Kubernetes breaks that model: now an individual application is made up of dozens or hundreds of components that are spread across multiple servers. And you have container images, you have configuration, I mean, you have data, and it's very difficult for any one person to understand where any of that is in the cluster at any given moment. And so you need to leverage automation and the ability for Kubernetes to understand where a particular set of components is deployed, and use that Kubernetes-native functionality to take what we call application-aware backups. So what PX backup provides is data protection engineered from the ground up for this new application delivery model that we see within Kubernetes. So unlike traditional backup and recovery solutions that were very machine-focused, we can allow a team to back up a single application within their Kubernetes cluster, all of the applications in a namespace, or the entire cluster all at once, and do so in a self-service manner where, integrated with your corporate identity systems, individuals can be responsible for protecting their own applications. So we marry a couple of really important concepts: the application-specific nature of Kubernetes, the self-service desire of DevOps teams, as well as a pay-as-you-go model, where you can have this flexible consumption model where, as you grow, you pay more. You don't have to do an upfront payment in order to protect your Kubernetes applications. >> Yeah, I think one key thing that Michael hit on was just how this application is designed to fit like a glove with the Kubernetes admin. I see a lot of parallels to what happened over a decade ago in the VMware space when, you know, VMware came about and it needed to be backed up differently. And a little company called Veeam built a tool that was purpose-built for it.
And it just had a really warm embrace by the VMware community, because it really felt like it was built for them, not some legacy enterprise backup application that was forced to fit into this new use case. And, you know, we think that the opportunity is very similar on Kubernetes backup, and perhaps the difference of the environment is even more profound than on the VMware side, where, you know, the Kubernetes admin really wants something that fits in their operational model, deploys within the cluster itself, backs up to object storage; it's just perfect, purpose-built for this use case. And so we see a huge opportunity for that, and we believe that for a lot of customers, this might be the easiest place for them to start trying the Portworx portfolio. You know, you've got an existing Kubernetes cluster: download this, give it a shot, it'll work on any infrastructure you've got going with Kubernetes today. >> And especially because, you know, looking at the kind of breakdown of Kubernetes, the way data is, you know, and infrastructure is provisioned. Data is placed in cloud services. It's no longer necessarily the cluster admin that gets to decide where data goes, what application has access to it; you know, that's in the hands of the developers. And that's a pretty big shift, you know; it used to be the VI admin, the virtualization admin, that did that, that had control over where data was living, where data was accessed and how it was accessed. But now we see developers kind of taking control over their infrastructure resources. They get to decide where it runs, how it runs, what services to use, what applications to tie it into. So I'm curious, you know, how do Portworx and PX backup kind of help the developer stay in control and still have that freedom of choice? >> Yeah, we think of it in terms of data services. So I have a database, and I need it to be highly available, I need it to be encrypted, backed up. I might need a DR, an off-site DR schedule.
And with Portworx, you can think about adding these services, HA, security, backup, capacity management, as really just: I want to check a box, and now I have this service available. My database is now highly available, it's backed up, it's encrypted. I can migrate it, I can attach a backup schedule to it. Because within a Kubernetes cluster, some apps are going to need that entire menu of services, and some apps might not need any of those services because they're only in test or stage; everything is multiplexed into a single cluster. And so being able to turn on and off these various data services is how we empower a developer, a DevOps team, to take an application all the way from test dev into production without having to really change anything about their Kubernetes deployments besides, you know, a flag within their YAML file. It makes it really, really easy to get the performance and the security and the availability that we were used to with VM-based applications, now within Kubernetes. >> So Matt, I want to spend the last couple of minutes talking about the bigger picture, right? We've talked about Portworx, PX backup. I want to take a look at the broader storage picture of cloud native and kind of look at the Pure angle on the trends, on what you see happening in this space. >> Yeah, absolutely. You know, a couple of high-level things I would, you know, kind of talk about. The first is that, I think, you know, hybrid cloud deployments are the de facto now. And so when people are picking storage, whether it be, you know, storage for a traditional database application or a next-gen, cloud native application, the thought from the beginning is: how do I architect for hybrid? And so, you know, within the Pure portfolio, we've really thought about how we build solutions that work with cloud native apps, like Portworx does, but also traditional applications.
And our Cloud Block Store allows, you know, those to be mobilized to the cloud with minimal re-architecture. Another big trend that we see is the growth of object storage. And, you know, if you look at the first generation of object storage, object storage is, what, 15-plus years old, and many of the first deployments were characterized by really low cost, low performance, kind of the last retention layer, if you will, for unimportant content. But then this web application thing happened, and people started to build web apps that used object storage as their primary storage. And so now, as people try to bring those cloud native applications on-prem and build them in a multicloud way, there's a real growth in the need for, you know, high-performance object storage for applications. And so we see this real change to the needs and requirements on the object storage landscape. And it's one that, in particular, we're trying to serve with our FlashBlade product, which provides unified file and object access, because many of those applications are kind of graduating from file or moving towards object, but they can't do that overnight. And so being able to provide a high-performance way to deliver unstructured data, (indistinct) object or file, is very strategic right now. >> Well, that's insightful. Thanks. So I want to thank you both for being here. And, you know, I look forward to hearing about Portworx and Pure in the future, as the acquisition, you know, integrates and new products and new developments come out from the Pure side. So thanks both for being here, and thank you at home for watching. I'm Joep Piscaer, thanks for watching theCUBE's coverage of KubeCon CloudNativeCon 2020. Thanks. >> Yeah. Thanks too. >> Yeah. Thank you. (gentle music)
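As a footnote to the application-aware backup discussion in this interview: the core idea, selecting everything that makes up one application by namespace and label rather than walking machines, can be caricatured in a few lines. The inventory, namespaces, and labels below are invented for illustration; this is a sketch of the concept, not PX backup's implementation.

```python
# Toy cluster inventory: each resource records its kind, namespace, and labels.
resources = [
    {"kind": "Deployment", "ns": "shop", "labels": {"app": "cart"}},
    {"kind": "ConfigMap", "ns": "shop", "labels": {"app": "cart"}},
    {"kind": "PersistentVolumeClaim", "ns": "shop", "labels": {"app": "cart"}},
    {"kind": "Deployment", "ns": "shop", "labels": {"app": "search"}},
    {"kind": "Deployment", "ns": "billing", "labels": {"app": "invoices"}},
]

def select_for_backup(resources, namespace=None, app=None):
    """Pick everything belonging to one app, one namespace, or the whole cluster."""
    selected = []
    for r in resources:
        if namespace is not None and r["ns"] != namespace:
            continue
        if app is not None and r["labels"].get("app") != app:
            continue
        selected.append(r)
    return selected

cart_backup = select_for_backup(resources, namespace="shop", app="cart")  # one application
shop_backup = select_for_backup(resources, namespace="shop")              # one namespace
full_backup = select_for_backup(resources)                                # entire cluster
```

The three calls mirror the three scopes mentioned in the conversation: a single application, all applications in a namespace, or the entire cluster at once.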

Published Date : Nov 19 2020



Stuti Deshpande, AWS | Smart Data Marketplaces


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of smart data marketplaces, brought to you by Io Tahoe. >> Hi everybody, this is Dave Vellante. And welcome back. We've been talking about smart data. We've been hearing Io Tahoe talk about putting data to work, and at the heart of building great data outcomes is the Cloud, of course, and also Cloud native tooling. Stuti Deshpande is here. She's a partner solutions architect for Amazon Web Services and an expert in this area. Stuti, great to see you. Thanks so much for coming on theCUBE. >> Thank you so much for having me here. >> You're very welcome. So let's talk a little bit about Amazon. I mean, you have been on this machine learning journey for quite some time. Take us through how this whole evolution has occurred in technology over this period of time, since the Cloud really has been evolving. >> Amazon itself is an example of a company that has gone through a multi-year machine learning transformation to become the machine learning driven company that you see today: improving on the original personalization models, using robotics across all the different fulfillment centers, developing a forecasting system to predict customer needs and improving on that, and meeting customer expectations on convenience, fast delivery and speed; from developing natural language processing technology for end user interaction, to developing groundbreaking technology such as Prime Air drones to deliver packages to customers. So our goal at Amazon Web Services is to take this rich expertise and experience with machine learning technology across Amazon, and to work with thousands of customers and partners to put this powerful technology into the hands of developers and data engineers of all levels. >> Great. So, okay. So if I'm a customer or a partner of AWS, give me the sales pitch on why I should choose you for machine learning.
What are the benefits that I'm going to get specifically from AWS? >> Well, there are three main reasons why partners choose us. First and foremost, we provide the broadest and the deepest set of machine learning and AI services and features for your business. The velocity at which we innovate is truly unmatched: over the last year, we launched 200 different services and features. So not only is our pace accelerating, but we provide fully managed services to our customers and partners, who can easily build sophisticated AI-driven applications; utilizing those fully managed services, they can build and train and deploy machine learning models, which is both valuable and differentiating. Secondly, we can accelerate the adoption of machine learning. As I mentioned about fully managed services for machine learning, we have Amazon SageMaker. SageMaker is a fully managed service that any developer of any level, or a data scientist, can utilize to build complex machine learning algorithms and models and deploy them at scale with much less effort and at a much lower cost. Before SageMaker, it used to take so much time and expertise and specialization to build all these extensive models, but with SageMaker, you can literally build any complex model within just days or weeks. So to increase adoption, AWS has acceleration programs and solution labs, and we also have education and training programs such as DeepRacer, which is focused on reinforcement learning, and Embark, which actually help organizations adopt machine learning very readily. And we also support the major frameworks, such as TensorFlow and PyTorch; we have separate teams who are dedicated to just focus on these frameworks and improve the support of these frameworks for a wide variety of workloads. And finally, we provide the most comprehensive platform that is optimized for machine learning.
So when you think about machine learning, you need to have a data store where you can store your training sets, your test sets, which is a highly reliable, highly scalable and secure data store. Most of our customers want to store all of their data, and any kind of data, in a centralized repository that can be treated as the central source of truth, and in this case build from the Amazon S3 data store an end-to-end machine learning workflow. So we believe that we provide this capability of having the most comprehensive platform to build the machine learning workflow end to end. >> Great. Thank you for that. So my next question is, this is a complicated situation for a lot of customers. You know, having the technology is one thing, but adoption is sort of everything. So I wonder if you could paint a picture for us and help us understand how you're helping customers think about machine learning, thinking about that journey, and maybe give us the context of what the ecosystem looks like? >> Sure. If someone can put up the slide, I would like to provide a pictorial representation of how AWS envisions machine learning as three layers of a stack. Moving on to the next build, I can talk about the bottom tier. The bottom tier, as you can see on this screen, is basically for advanced technologists, advanced data scientists, who are machine learning practitioners who work at the framework level. 90% of data scientists use multiple frameworks, because different frameworks are suited to different kinds of workloads. So at this layer, we provide support for all of the different types of frameworks. The bottom layer is only for the advanced scientists and developers who actually want to build, train and deploy these machine learning models by themselves. Moving on to the next level, which is the middle layer: this layer is suited for non-experts.
So here we have SageMaker, which provides a fully managed service where you can build, tune, train and deploy your machine learning models at a very low cost, with very minimal effort and at a higher scale. It removes all the complexity, heavy lifting and guesswork from each stage of machine learning, and Amazon SageMaker has been a game changer; many of our customers are actually standardizing on top of Amazon SageMaker. And then moving on to the next layer, which is the topmost layer: we call these AI services, because they mimic human cognition. So all of the services mentioned here, such as Amazon Rekognition, which is basically a deep learning service optimized for image and video analysis, and then Amazon Polly, which can do text-to-speech conversion, and so on and so forth. So these are the AI services that can be embedded into applications so that the end user or the end customer can build AI-driven applications. >> Love it. Okay. So you've got the experts at the bottom with the frameworks, the hardcore data scientists; you kind of get the self-driving machine learning in the middle; and then you have all the ingredients. I'm like an AI chef or a machine learning chef: I can pull in vision, speech, chatbots, fraud detection, and sort of compile my own solutions. That's cool. We hear a lot about SageMaker Studio. I wonder if you could tell us a little bit more; can we double-click a little bit on SageMaker? That seems to be a pretty important component of that stack that you just showed us. >> I think that was an absolutely great summarization of all the different layers of the machine learning stack, so thank you for providing the gist of that. Of course, I'll be really happy to talk about Amazon SageMaker, because most of our customers are actually standardizing on top of SageMaker.
I've spoken about how machine learning traditionally has so many complications; it's a very complex, expensive and iterative process, which is made even harder because there are no integrated tools. If you do traditional machine learning development and deployment, there are no integrated tools for the entire workflow process. And that is where SageMaker comes into the picture. SageMaker removes all the heavy lifting and complexity from each step of the deployment of the machine learning workflow. It solves these challenges by providing all of the different components that are optimized for every stage of the workflow in one single tool set, so that models get to production faster, with much less effort and at a lower cost. We really continue to add important (indistinct) to Amazon SageMaker; I think last year we announced 50-plus capabilities for SageMaker, improving its features and functionality. And I would love to call out a couple of those here. SageMaker Notebooks are one-click Jupyter notebooks that come along with EC2 instances; I'm sorry for using jargon here, EC2 being Amazon Elastic Compute Cloud. So you just need a one-click deployment and you have the entire SageMaker notebook interface, along with the compute instances running, and that gives you faster time to production. If you are a data scientist or a data engineer who has worked extensively on machine learning, you must be aware that building training datasets is really complex. So there we have Amazon SageMaker Ground Truth, which is for building machine learning training data sets, and which can reduce your labeling cost by 70%. And if you perform machine learning and model inference in general, there are some workflows where you need to do inference. So there we have Elastic Inference, with which you can reduce the cost by 75% by adding a little GPU acceleration.
Or you can reduce the cost by adding managed spot training, utilizing EC2 Spot Instances. So there are multiple ways that you can reduce the cost, and there are multiple ways that you can improve and speed up your machine learning deployment and workflow. >> So one of the things I love about, I mean, I'm a Prime member, who isn't, right? I love to shop at Amazon. And what I like about it is the consumer experience. It kind of helps me find things that maybe I wasn't aware of, maybe based on other patterns that are going on in the buying community with people that are similar. If I want to find a good book, it always gives me great reviews and recommendations. So I'm wondering if that applies to sort of the tech world and machine learning; are you seeing any patterns emerge across the various use cases? You have such scale. What can you tell us about that? >> Sure. One of the patterns that we have seen all the time is to build a scalable layer for any kind of use case. So as I spoke about before, customers are really looking to put their data into a single repository where they have the single source of truth. So storing any kind of data, at any velocity, in a single source of truth actually helps them build models that run on that data and get useful insights out of it. So when you speak about an end-to-end workflow, using Amazon SageMaker along with a scalable analytical tool is actually what we have seen as one of the patterns, where they can perform some analysis using Amazon SageMaker and build predictive models. For example, if you want to take a healthcare use case, they can build a predictive model that can minimize readmissions using Amazon SageMaker. So what I mean to say is, by not moving data around and connecting different services to the same source of data, that is to avoid creating copies of data, which is very crucial when you have training data sets and test data sets with Amazon SageMaker.
And it is highly important to consider this. So the pattern that we have seen is to utilize a central source of depository of data, which could be Amazon Extra. In this scenario, scalable analytical layer along with SageMaker. I would have to code at Intuit for a success story over here. I'm using sandwich, a Amazon SageMaker Intuit had reviews the machine learning deployment time by 90%. So I'm quoting here from six months to one week. And if you think about a healthcare industry, there hadn't been a shift from reactive to predictive care. So utilizing predictive models to accelerate research and discovery of new drugs and new treatments. And you've also observed that nurses were supported by AI tools increase their, their productivity has increased by 50%. I would like to say that one of our customers are really diving deep into the AWS portfolio of machine learning and AI services and including transcribed medical, where they are able to provide some insights so that their customers are getting benefits from them. Most of their customers are healthcare providers and they are able to give some into insights so that they can create some more personalized and improvise patient care. So there you have the end user benefits as well. One of the patterns that I have, I can speak about and what we have seen as well, appearing a predictive model with real time integration into healthcare records will actually help their healthcare provider customers for informed decision making and improvising the personalized patient care. >> That's a great example, several there. And I appreciate that. I mean, healthcare is one of those industries that is just so right for technology ingestion and transformation, that is a great example of how the cloud has really enabled really. I mean, I'm talking about major changes in healthcare with proactive versus reactive. We're talking about lower costs, better health, longer lives is really inspiring to see that evolve. 
We're going to watch it over the next several years. I wonder if we could close in the marketplace. I've had the pleasure of interviewing Dave McCann, a number of times. He and his team have built just an awesome capability for Amazon and its ecosystem. What about the data products, whether it's SageMaker or other data products in the marketplace, what can you tell us? >> Sure. Either of this market visits are interesting thing. So let me first talk about the AWS marketplace of what, AWS marketplace you can browse and search for hundreds of machine learning algorithms and machine learning, modern packages in a broad range of categories that this company provision, fixed analysis, voice answers, email, video, and it says predictive models and so on and so forth. And all of these models and algorithms can be deployed to a Jupiter notebook, which comes as part of the SageMaker that form. And you can integrate all of these different models and algorithms into our fully managed service, which is Amazon SageMaker to Jupiter notebooks, Sage maker, STK, and even command as well. And this experience is followed by either of those marketplace catalog and API. So you get the same benefits as any other marketplace products, the just seamless deployments and consolidate it. So you get the same benefits as the products and the invest marketplace for your machine learning algorithms and model packages. And this is really important because these can be darkly integrated into our SageMaker platform. And I don't even be honest about the data products as well. And I'm really happy to provide and code one of the example over here in the interest of cooler times and because we are in unprecedented times over here we collaborated with our partners to provide some data products. And one of them is data hub by tablet view that gives you the time series data of phases and depth data gathered from multiple trusted sources. 
And this is to provide better and informed knowledge so that everyone who was utilizing this product can make some informed decisions and help the community at the end. >> I love it. I love this concept of being able to access the data, algorithms, tooling. And it's not just about the data, it's being able to do something with the data and that we've been talking about injecting intelligence into those data marketplaces. That's what we mean by smart data marketplaces. Stuti Deshpande, thanks so much for coming to theCUBES here, sharing your knowledge and tell us a little bit about AWS. There's a pleasure having you. >> It's my pleasure too. Thank you so much for having me here. >> You're very welcome. And thank you for watching. Keep it right there. We will be right back right after this short break. (soft orchestral music)


Stuti Deshpande, AWS | Smart Data Marketplaces


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of smart data marketplaces, brought to you by Io Tahoe. >> Hi everybody, this is Dave Vellante, and welcome back. We've been talking about smart data. We've been hearing Io Tahoe talk about putting data to work, and at the heart of building great data outcomes is the cloud, of course, and also cloud-native tooling. Stuti Deshpande is here. She's a partner solutions architect for Amazon Web Services and an expert in this area. Stuti, great to see you. Thanks so much for coming on theCUBE. >> Thank you so much for having me here. >> You're very welcome. So let's talk a little bit about Amazon. I mean, you have been on this machine learning journey for quite some time. Take us through how this whole evolution has occurred, since the cloud really has been evolving. >> Amazon itself is an example of a company that has gone through a multi-year machine learning transformation to become the machine-learning-driven company that you see today: improving on its original personalization models, using robotics in its fulfillment centers, developing forecasting systems to predict customer needs, and redefining customer expectations around convenience, fast delivery and speed, from developing natural language processing technology for end-user interaction to developing groundbreaking technology such as Prime Air drones to deliver packages to customers. So our goal at Amazon Web Services is to take this rich expertise and experience with machine learning technology across Amazon, and to work with thousands of customers and partners to put this powerful technology into the hands of developers and data engineers of all levels. >> Great. So, okay. So if I'm a customer or a partner of AWS, give me the sales pitch on why I should choose you for machine learning.
What are the benefits that I'm going to get specifically from AWS? >> Well, there are three main reasons why partners choose us. First and foremost, we provide the broadest and the deepest set of machine learning and AI services and features for your business. The velocity at which we innovate is truly unmatched: over the last year, we launched 200 different services and features. So not only is our pace accelerating, but we provide fully managed services, so our customers and partners can easily build sophisticated AI-driven applications; utilizing those fully managed services, they can build, train and deploy machine learning models, which is both valuable and differentiating. Secondly, we can accelerate the adoption of machine learning. As I mentioned, for fully managed machine learning we have Amazon SageMaker. SageMaker is a fully managed service that any developer of any level, or any data scientist, can utilize to build complex machine learning algorithms and models and deploy them at scale with much less effort and at a much lower cost. Before SageMaker, it used to take so much time, expertise and specialization to build all these extensive models, but with SageMaker you can build complex models within a matter of days or weeks. To increase adoption, AWS has acceleration programs such as the ML Solutions Lab, and we also have education and training programs such as DeepRacer, which focuses on reinforcement learning, and ML Embark, which actually helps organizations adopt machine learning very readily. And we also support the three major frameworks, such as TensorFlow, with separate teams dedicated to improving the support of these frameworks for a wide variety of workloads. And finally, we provide the most comprehensive platform that is optimized for machine learning.
So when you think about machine learning, you need a data store where you can keep your training sets and your test sets: a highly reliable, highly scalable and secure data store. Most of our customers want to store all of their data, and any kind of data, in a centralized repository that can be treated as the central source of truth, and in this case to build an end-to-end machine learning workflow from the Amazon S3 data store. So we believe we provide the capability of having the most comprehensive platform to build the machine learning workflow end to end. >> Great. Thank you for that. So my next question is, this is a complicated situation for a lot of customers. You know, having the technology is one thing, but adoption is sort of everything. So I wonder if you could paint a picture for us and help us understand how you're helping customers think about machine learning, thinking about that journey, and maybe give us the context of what the ecosystem looks like? >> Sure. If someone can pull up the slide, I would like to provide a pictorial representation of how AWS envisions machine learning as a three-layer stack. Moving on to the next slide, I can talk about the bottom tier. The bottom tier, as you can see on this screen, is basically for advanced technologists, advanced data scientists who are machine learning practitioners and work at the framework level. 90% of data scientists use multiple frameworks, because different frameworks are suited to different kinds of workloads, so at this layer we provide support for all of the different types of frameworks. The bottom layer is only for the advanced scientists and developers who actually want to build, train and deploy these machine learning models by themselves. Moving on to the next level, the middle layer: this layer is suited for non-experts.
Here we have Amazon SageMaker, which provides a fully managed service where you can build, tune, train and deploy your machine learning models at a very low cost, with very minimal effort and at a higher scale. It removes all the complexity, heavy lifting and guesswork from each stage of machine learning, and Amazon SageMaker has been the game changer: many of our customers are actually standardizing on top of Amazon SageMaker. And then moving on to the next layer, the topmost layer, which we call AI services because these services mimic human cognition. The services mentioned here include Amazon Rekognition, which is basically a deep learning service optimized for image and video analysis, and Amazon Polly, which does text-to-speech conversion, and so on and so forth. These are the AI services that can be embedded into applications so that the end user or the end customer can build AI-driven applications. >> Love it. Okay. So you've got the experts at the bottom with the frameworks, the hardcore data scientists; you kind of get the self-driving machine learning in the middle; and then you have all the ingredients. I'm like an AI chef, or a machine learning chef: I can pull in vision, speech, chatbots, fraud detection, and sort of compile my own solutions. That's cool. We hear a lot about SageMaker Studio. I wonder if you could tell us a little bit more. Can we double-click a little bit on SageMaker? That seems to be a pretty important component of the stack that you just showed us. >> I think that was an absolutely great summarization of all the different layers of the machine learning stack, so thank you for providing the gist of that. Of course, I'll be really happy to talk about Amazon SageMaker, because most of our customers are actually standardizing on top of SageMaker.
Dave has spoken about how machine learning traditionally has so many complications: it's a very complex, expensive and iterative process, which is made even harder because, in a traditional machine learning deployment, there are no integrated tools for the entire workflow process and deployment. And that is where SageMaker comes into the picture. SageMaker removes all the heavy lifting and complexity from each step of the machine learning workflow. It solves these challenges by providing all of the different components, optimized for every stage of the workflow, in one single tool set, so that models get to production faster, with much less effort and at a lower cost. We really continue to add important capabilities to Amazon SageMaker; I think last year we announced around 50 new capabilities, improving its features and functionality, and I would love to call out a couple of those here. SageMaker Notebooks are one-click notebooks that come with EC2 instances (sorry for using jargon here, that is Amazon Elastic Compute Cloud instances). So you just need a one-click deployment and you have the entire SageMaker notebook interface, along with the Elastic Compute instances running, which gives you faster time to production. If you are a data scientist or a data engineer who has worked extensively on machine learning, you must be aware that building training datasets is really complex. For that we have Amazon SageMaker Ground Truth, which is purpose-built for creating machine learning training datasets and can reduce your labeling costs by up to 70%. And in machine learning there are workflows where you need to run inference. For that we have Amazon Elastic Inference, with which you can reduce inference costs by up to 75% by adding a small amount of GPU acceleration.
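As an editor's aside, the savings just quoted (up to 70% on labeling, up to 75% on inference) are straightforward percentage reductions. A minimal sketch of that arithmetic, using hypothetical baseline dollar figures that are not from the interview:

```python
# Editor's illustration (not from the interview): rough arithmetic for the
# quoted cost reductions. All baseline figures below are hypothetical.

def reduced_cost(baseline: float, reduction_pct: float) -> float:
    """Return the cost after applying a percentage reduction."""
    if not 0 <= reduction_pct <= 100:
        raise ValueError("reduction_pct must be between 0 and 100")
    # Integer-friendly form avoids float artifacts like 1 - 0.7.
    return baseline * (100 - reduction_pct) / 100

# Hypothetical monthly baselines, in dollars.
labeling_baseline = 10_000.0   # human labeling of a training dataset
inference_baseline = 4_000.0   # dedicated GPU instances serving predictions

labeling_after = reduced_cost(labeling_baseline, 70)    # 3000.0
inference_after = reduced_cost(inference_baseline, 75)  # 1000.0

print(f"labeling:  ${labeling_baseline:,.0f} -> ${labeling_after:,.0f}")
print(f"inference: ${inference_baseline:,.0f} -> ${inference_after:,.0f}")
```

The real savings depend, of course, on workload shape and instance choice; the point is only that "up to N%" claims compound quickly across the labeling, training and inference stages of one workflow.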
Or you can reduce training costs by using Managed Spot Training, which utilizes EC2 Spot Instances. So there are multiple ways you can reduce costs, and multiple ways you can improve and speed up your machine learning deployment and workflow. >> So one of the things I love about, I mean, I'm a Prime member, who isn't, right? I love to shop at Amazon, and what I like about it is the consumer experience. It kind of helps me find things that maybe I wasn't aware of, maybe based on patterns in the buying community among people that are similar, if I want to find a good book. It always gives me great reviews and recommendations. So I'm wondering if that applies to the tech world and machine learning. Are you seeing any patterns emerge across the various use cases? You have such scale. What can you tell us about that? >> Sure. One of the patterns that we have seen all the time is building a scalable data layer for any kind of use case. As I said before, customers are really looking to put their data into a single repository where they have a single source of truth. Storing any kind of data, at any velocity, in a single source actually helps them build models that run on that data and get useful insights out of it. So when you speak about an end-to-end workflow, using Amazon SageMaker along with a scalable analytics layer is one of the patterns we have seen: they can perform analysis and build predictive models with Amazon SageMaker. To take a healthcare use case as an example, they can build a predictive model that can minimize hospital readmissions. What I mean to say is, by not moving data around, and by connecting different services to the same source of data, customers avoid creating copies of data, which is very crucial when you are working with training datasets and test datasets with Amazon SageMaker.
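An editor's illustration of the "single source of truth" point above: instead of copying a dataset into separate training and test stores, one central store can hand each consumer a view, here just a list of record keys. All names and figures are hypothetical and not tied to any AWS API:

```python
# Editor's sketch: keep one central store and give train/test consumers
# key-based views, rather than duplicating the records themselves.
import random

def split_keys(keys, test_fraction=0.2, seed=42):
    """Deterministically partition record keys into (train, test) views."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = sorted(keys)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# One central store; nothing is ever copied.
store = {f"rec-{i:03d}": {"x": i, "y": i % 2} for i in range(100)}

train_keys, test_keys = split_keys(store.keys())
assert len(train_keys) == 80 and len(test_keys) == 20
assert set(train_keys).isdisjoint(test_keys)

# Consumers read through the views; each record lives in `store` only once.
train_view = (store[k] for k in train_keys)
```

The same idea scales up: a warehouse or object store holds the data once, and each tool (analytics, training, inference) is pointed at the shared source rather than at its own copy.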
And it is highly important to consider this. So the pattern that we have seen is to utilize a central repository of data, which could be Amazon S3 in this scenario, and a scalable analytics layer along with SageMaker. I have to quote Intuit for a success story over here: using Amazon SageMaker, Intuit reduced its machine learning deployment time by 90%, from six months to one week. And if you think about the healthcare industry, there has been a shift from reactive to predictive care, utilizing predictive models to accelerate research and the discovery of new drugs and new treatments. We have also observed that nurses supported by AI tools have increased their productivity by 50%. I would like to say that some of our customers are really diving deep into the AWS portfolio of machine learning and AI services, including Amazon Transcribe Medical, where they are able to provide insights from which their customers benefit. Most of their customers are healthcare providers, and they are able to give them insights so that they can deliver more personalized and improved patient care, so you have the end-user benefits as well. One of the patterns we have seen is pairing a predictive model with real-time integration into healthcare records, which actually helps healthcare-provider customers with informed decision making and improving personalized patient care. >> That's a great example, several there, and I appreciate that. I mean, healthcare is one of those industries that is just so ripe for technology ingestion and transformation, and that is a great example of how the cloud has really enabled major changes in healthcare, with proactive versus reactive care. We're talking about lower costs, better health, longer lives. It's really inspiring to see that evolve.
We're going to watch it over the next several years. I wonder if we could close on the marketplace. I've had the pleasure of interviewing Dave McCann a number of times; he and his team have built just an awesome capability for Amazon and its ecosystem. What about the data products, whether it's SageMaker or other data products in the marketplace? What can you tell us? >> Sure. Data marketplaces are an interesting thing. So let me first talk about AWS Marketplace. In AWS Marketplace you can browse and search for hundreds of machine learning algorithms and model packages in a broad range of categories, such as computer vision, text analysis, voice and video analysis, predictive models, and so on and so forth. And all of these models and algorithms can be deployed to a Jupyter notebook, which comes as part of the SageMaker platform. You can integrate all of these different models and algorithms into our fully managed service, Amazon SageMaker, through Jupyter notebooks, the SageMaker SDK, and even the command line as well. And this experience is powered by the AWS Marketplace catalog and API, so for your machine learning algorithms and model packages you get the same benefits as with any other Marketplace product, such as seamless deployment and consolidated billing. And this is really important, because these can be directly integrated into our SageMaker platform. I should be honest about the data products as well, and I'm really happy to quote one example over here. Because we are in unprecedented times, we collaborated with our partners to provide some data products, and one of them is the Data Hub by Tableau, which gives you time-series data of cases and deaths gathered from multiple trusted sources.
And this is to provide better and more informed knowledge, so that everyone utilizing this product can make informed decisions and help the community in the end. >> I love it. I love this concept of being able to access the data, the algorithms, the tooling. And it's not just about the data, it's being able to do something with the data, and that's why we've been talking about injecting intelligence into those data marketplaces. That's what we mean by smart data marketplaces. Stuti Deshpande, thanks so much for coming on theCUBE, sharing your knowledge and telling us a little bit about AWS. It's been a pleasure having you. >> It's my pleasure too. Thank you so much for having me here. >> You're very welcome. And thank you for watching. Keep it right there. We will be right back right after this short break. (soft orchestral music)

Published Date : Sep 3 2020


Chris Degnan, Snowflake & Anthony Brooks Williams, HVR | AWS re:Invent 2019


 

>>LA Las Vegas. It's the cube hovering AWS reinvent 2019 brought to you by Amazon web services and along with its ecosystem partners. >>Hey, welcome back to the cube. Our day one coverage of AWS reinvent 19 continues. Lisa Martin with Dave Volante. Dave and I have a couple of guests we'd like you to walk up. We've got Anthony Brooks billions, the CEO of HBR back on the cube. You're alumni. We should get you a pin and snowflake alumni. But Chris, your new Chris Dagon, chief revenue officer from snowflake. Chris, welcome to the program. Excited to be here. All right guys. So even though both companies have been on before, Anthony, let's start with you. Give our audience a refresher about HVR, who you guys are at, what you do. >>Sure. So we're in the data integration space, particularly a real time data integration. So we move data to the cloud in the in the most efficient way and we make sure it's secure and it's accurate and you're moving into environments such as snowflake. Um, and that's where we've got some really good customers that we happy to talk about joint custody that we're doing together. But Chris can tell us a little bit about snowflake. >>Sure. And snowflake is a cloud data warehousing company. We are cloud native, we are on AWS or on GCP and we're on Azure. And if you look at the competitive landscape, we compete with our friends at Amazon. We compete with our friends at Microsoft and our friends at Google. So it's super interesting place to be, but it very exciting at the same time and super excited to partner with Anthony and some others who aren't really a friends. That's correct. So I wonder if we could start by just talking about the data warehouse sort of trends that you guys see. When I talk to practitioners in the old days, they used to say to me things like, Oh, infrastructure management, it's such a nightmare. It's like a snake swallowing a basketball every time until it comes out with a new chips. 
We chase it because we just need more performance and we can't get our jobs done fast enough. And there's only three. There's three guys that we got to go through to get any answers and it was just never really lived up to the promise of 360 degree view of your business and realtime analytics. How has that changed? >>Well, there's that too. I mean obviously the cloud has had a big difference on that illustrious city. Um, what you would find is in, in, in yesterday, customers have these, a retail customer has these big events twice a year. And so to do an analysis on what's being sold and Casper's transactions, they bought this big data warehouse environment for two events a year typically. And so what's happening that's highly cost, highly costly as we know to maintain and then cause the advances in technology and trips and stuff. And then you move into this cloud world which gives you that Lester city of scale up, scale down as you need to. And then particular where we've got Tonies snowflake that is built for that environment and that elicited city. And so you get someone like us that can move this data at today's scale and volume through these techniques we have into an environment that then bleeds into helping them solve the challenge that you talk about of Yesi of >>these big clunky environments. That side, I think you, I think you kind of nailed it. I think like early days. So our founders are from Oracle and they were building Oracle AI nine nine, 10 G. and when I interviewed them I was the first sales rep showing up and day one I'm like, what the heck am I selling? And when I met them I said, tell me what the benefit of snowflake is. And they're like, well at Oracle, and we'd go talk to customers and they'd say, Oracles, you know, I have this problem with Oracle. They'd say, Hey, that's, you know, seven generations ago were Oracle. Do you have an upgraded to the latest code? 
So one of the things they talked about as being a service, Hey, we want to make it really easy. You never have to upgrade the service. And then to your point around, you have a fixed amount of resources on premise, so you can't all of a sudden if you have a new project, do you want to bring on the first question I asked when I started snowflake to customers was how long does it take you to kick off a net new workload onto your data, onto your Vertica and it take them nine to 12 months because they'd have to go procure the new hardware, install it, and guess what? >>With snowflake, you can make an instantaneous decision and because of our last test city, because the benefits of our partner from Amazon, you can really grow with your demand of your business. >>Many don't have the luxury of nine to 12 months anymore, Chris, because we all know if, if an enterprise legacy business isn't thinking, there's somebody not far behind me who has the elasticity, who has the appetite, who's who understands the opportunity that cloud provides. If you're not thinking that, as auntie Jessie will say, you're going to be on the wrong end of that equation. But for large enterprises, that's hard. The whole change culture is very hard to do. I'd love to get your perspective, Chris, what you're seeing in terms of industries shifting their mindsets to understand the value that they could unlock with this data, but how are big industries legacy industries changing? >>I'd say that, look, we were chasing Amad, we were chasing the cloud providers early days, so five years ago, we're selling to ad tech and online gaming companies today. What's happened in the industry is, and I'll give you a perfect example, is Ben wa and I, one of our founders went out to one of the largest investment banks on wall street five years ago, and they said, and they have more money than God, and they say, Hey, we love what you've built. We love, when are you going to run on premise? 
And Ben, Ben wa uttered this phrase of, Hey, you will run on the public cloud before we ever run in the private cloud. And guess what? He was a truth teller because five years later, they are one of our largest customers today. And they made the decision to move to the cloud and we're seeing financial services at a blistering face moved to the cloud. >>And that's where, you know, partnering with folks from HR is super important for us because we don't have the ability to just magically have this data appear in the cloud. And that's where we rely quite heavily on on instance. So Anthony, in the financial services world in particular, it used to be a cloud. Never that was an evil word. Automation. No, we have to have full control and in migration, never digital transformation to start to change those things. It's really become an imperative, but it's by in particular is really challenging. So I wonder if we could dig into that a little bit and help us understand how you solve that problem. >>Yes. A customer say they want to adopt some of these technologies. So there's the migration route. They may want to go adopt some of these, these cloud databases, the cloud data warehouses. And so we have some areas where we, you know, we can do that and keep the business up and running at the same time. So the techniques we use are we reading the transactional logs, other databases or something called CDC. And so there'll be an initial transfer of the bulk of the data initiative stantiating or refresh. At that same time we capturing data out of the transaction logs, wildlife systems live and doing a migration to the new environment or into snowflakes world, capturing data where it's happening, where the data is generated and moving that real time securely, accurately into this environment for somewhere like 1-800-FLOWERS where they can do this, make better decisions to say the cost is better at point of sale. >>So have all their business divisions pulling it in. 
So there are the migration aspects, and then there's the use case around the real-time reporting as well. >> So you're essentially refueling the plane while you're in mid-air. Um, yeah, that's a good one. So what does the customer see? How disruptive is it? How do you minimize that disruption? >> Well, the good thing is, we've got these experienced teams, like Chris said, that have been around the block, and a lot of us have done this. It's what we've done for the last 15 years, at companies like GoldenGate that we sold to Oracle, and those things. So there's a whole consultative approach, versus just, here's some software, good luck with it. There's a lot of planning that goes into it, and then, using our technologies that are well suited to this, we've shown some good success, and that's a key focus for us. In our world, in this subscription-based SaaS world, customer success is key, and so we have to build a lot of that into how we make this successful as well. >> I think it's a barrier to entry, going from on-premise to the cloud. That's the number one pushback we get when we go out and say, Hey, we have a cloud-native data warehouse. Like, how the heck are we going to get the data to the cloud? And that's where a partnership with HVR is super important. Yeah.
If you look at our engagement with customers, we go in and we actually have to sell the value of Snowflake, and then they say, well, okay, great, go talk to the security team. Then we talk to the security team and say, Hey, let me show you how we secure data. And then they have to get comfortable around how they're going to actually move the data from on-premise to the cloud, and that's again when we engage with partners like HVR. So yeah. >> And then we go through a whole process with the customer. There's taking some of that data into a POC-type environment and proving it out before it gets rolled out, and a lot of references and case studies around it as well. >> It depends on the customer. You have some customers who are bold, and it doesn't matter the size: we have a Fortune 100 customer who literally moved from an on-premise Teradata system to Snowflake in 111 days, because they were all in. You have other customers that say, Hey, I'm going to take it easy, I'm going to go workload by workload. It just depends, and the mileage may vary. >> Can you give us an example, maybe a customer example, of what workloads they moved? Was it reporting? What other kinds? >> Oh yeah. We could talk a little bit about 1-800-FLOWERS. We can talk about someone like Pitney Bowes, where they were moving from Oracle and SQL Server. There's a bunch of SAP data sitting in SAP ECC, so there's some complexity around how you acquire and decode that data, which we've built a unique ability to do: we can decode the cluster and pool tables, coupled with our CDC technique. And they had some stringent performance needs that a bunch of the vendors couldn't meet.
And so, between both our companies, we were able to solve their challenge jointly and move this data at scale, at the performance they needed, out of these Oracle and SQL Server environments into Snowflake. >> I almost feel like when you have an SAP environment, the data is almost stuck in SAP, so getting it out is scary, right? And this is where it's super awesome for us to do work like this. >> On that front, I wanted to understand your thoughts on transformation. It's a theme of re:Invent 2019, a word that we hear at every event, whether we're talking about digital transformation, workforce, IT, et cetera. But one of the things Andy Jassy said this morning that caught us at the start is that this is more than technology, right? The next-gen cloud is more than technology; it's about getting those senior leaders on board. Chris, your perspective: looking at financial services first, we were really surprised at how quickly they've been able to move, understanding, presumably, that if they don't, there will be other businesses that do. Are you seeing that as the chief revenue officer? Are your conversations starting at that CEO level? >> It kind of has to, and here's the reason why. If you do a bottoms-up approach and say, Hey, I've got a great technology, and you sell this great technology to a tech person, the reality is, unless the CEO, CIO, or CTO has an initiative to do digital transformation and move to the cloud, you'll die. You'll die in security, you'll die in legal; lawyers love to kill deals. Those are the two areas where I see deals slow down significantly. And so it's getting through those processes and finding the champion at the CEO level, CIO level, CTO level. If you're a modern-day CIO and you do not have a cloud strategy, you're probably going to get replaced in 18 months.
So you know, you'd better get on board, and you'd better take advantage of what's happening in the industry. >> And I think, coupled with that, in today's world, you said it gets thrown around as a theme, particularly the last couple of years, but now it's actually a strategy and a reality, because what you've seen is that there are as many tech-savvy people sitting on the business side of organizations today as used to sit in legacy IT. I think it's that, coupled with the leadership driving it, that's demanding it, demanding to be able to access a certain type of data in a geo to make decisions that affect the business right now. >> I wonder if we could talk a little bit more about some of the innovations that are coming up. I mean, I've been really hard on the data warehouse industry; you can tell I'm jaded, I've been around a long time. I've always said that Sarbanes-Oxley saved old-school BI and data warehousing because of all the reporting requirements, and again, that business never lived up to its promises. But it seems like there's this whole new set of workloads emerging in the cloud, where you take a data warehouse like a Snowflake, you may be bringing in some ML tools, maybe it's Databricks or whatever, you've got HVR helping you sort of virtualize the data, and people are driving new workloads that are bringing insights they couldn't get before, in near real time. What are you seeing in terms of some of those gestalt trends, and how are companies taking advantage of these innovations? >> I think one is just the general proliferation of data.
There's just more data, and like you're saying, from many different sources. They're capturing data from CNC machines in factories, like we do for someone like GE; that type of data, plus financial data that's sitting in a BU. Taking all of that, there's just a vast amount of data: how can we get a total view of our business, and at a board level make better decisions? That's where they put it in Snowflake, in an elastic environment that allows them to do this consolidated view of the whole organization. But I think it's largely been driven by the fact that things have been digitized, there are sensors on everything, and there's just a sheer volume of data. All of that coming together is what's driven it. >> Another question is data access. We talked about security a little bit, but who has rights to access the data? Is that a challenge? How are you guys solving that? >> I mean, I think it's like anything: it's about people starting to understand how the data is handled. We're an ACID-compliant SQL database, so whatever security you use on-premise, you can use the same on Snowflake. It's just a misperception the industry has that being in a data center is more secure than being in the cloud, and it's actually wrong. >> I guess my question is not so much security in the cloud; it's more what you were saying about the disparate data sources that are coming in hard and fast now. How do you keep track of who has access to the data? Is it another security tool, or is it a partnership? >> Yeah, absolutely. So there's also, with financial data, the question of whether data can leave certain geos, whether it be in the EU, or for certain companies, particularly the big banks, and now California. There's stuff we can do from a security perspective: the data that we move is secure, it's encrypted.
If we're capturing data from multiple different sources, we have the ability to take it all through one proxy in the firewall, which helps a lot in that aspect; that's something unique in our technology. But then there are other tools that they have, and largely you sit down with them, and it's the sort of governance they have in the organization: how do they tackle that, and the rules they set around it, you know? >> Well, the last question I have is, we're seeing, you know, I look at the spending data in my Breaking Analysis, go on my LinkedIn, you'll see it: Snowflake's off the charts. It's up there with robotic process automation, and obviously Redshift is very strong. Do you see those two, and I think you addressed it before, but I'd love to get you on record, coexisting and thriving? Really, that's not the enemy, right? It's the Teradatas and the IBMs and the Oracles. >> I think, look, our relationship with Amazon is like a 20-year marriage, right? Sometimes there are good days, sometimes there are bad days. And every year about this time, we get a bat-phone call from someone at Amazon saying, Hey, you know, the Redshift team's coming out with a Snowflake killer. I've heard that literally for six years now. It turns out there's an opportunity for us to coexist; it turns out there's an opportunity for us to compete. And it's all about how they handle themselves as a business. Amazon has been tremendous in the separation of that: okay, we're going to partner here, we're going to compete here, and we're okay if you guys beat us. That's how they operate. But yes, it is complex, and there are challenges. >> Well, the marketplace guys must love you, though, because you're selling a lot of compute. >> Well, yeah, yeah. We have a similar thing.
I mean, AWS has a technology, DMS, their data migration service, and they work with us. They refer opportunities to us when it's these big enterprises with use cases of scale, complexity, and volume of data; that's what we do. We're not necessarily into the smaller mom-and-pop-type shops that just want to adopt it, and I think that's where we're both able to coexist together. There's more than enough. >> All right. You're right. It's like, Hey, we have champions in the S3 group, the EC2 group, the PrivateLink group, you know, across all the Amazon products. So there are a lot of friends of ours. Yeah, the Redshift team doesn't like us, but that's okay, I can live in >> healthy coopetition. But it just goes to show that not only do customers and partners have choice, they're exercising it. Gentlemen, thank you for joining Dave and me on theCUBE this afternoon. We appreciate your time. >> Thank you for having us. >> Our pleasure. For Dave Vellante, I'm Lisa Martin. You're watching theCUBE from day one of our coverage of AWS re:Invent 2019. Thanks for watching.

Published Date : Dec 3 2019



Kalyan Ramanathan, Sumo Logic | Sumo Logic Illuminate 2019


 

>> Narrator: From Burlingame, California, it's theCUBE. Covering Sumo Logic Illuminate 2019. Brought to you by Sumo Logic. >> Hey, welcome back, everybody, Jeff Frick here with theCUBE. We're at Sumo Logic Illuminate 2019. It's at the Hyatt Regency San Francisco Airport. We're excited to be back. It's our second year, so third year of the show, and really, one of the key tenets of this whole event is the report. It's the fourth year of the report, The Continuous Intelligence Report, and here to tell us all about it is the VP of Product Marketing, Kalyan Ramanathan. He's, like I said, VP of Product Marketing at Sumo Logic. Great to see you again. >> All right, thank you, Jeff. >> What a beautiful report. >> Absolutely, I love the cover, and I love the data in the report even more. >> Yeah, but you cheat, you cheat. >> How come? >> 'Cause it's not a survey. You guys actually take real data. >> Ah, that's exactly right, exactly right. >> No, I love that, let's jump into it. It's a pretty interesting fact, though, and it came out in the keynote, that this is not a survey. Tell us how you get the data. >> Yeah, I mean, as you already know, Sumo Logic is a continuous intelligence platform, and what we do is help our customers manage the operations and security of their mission-critical applications. The way we do that is by collecting machine data from our customers, and many of our customers, we have two thousand of them, are all running modern applications in the cloud. When we collect this machine data, we can gain insights into how these customers are building their applications, how they're running and securing those applications, and that insight is what's reflected in this report. So you're exactly right, this is not a survey. This is data from our customers that we bring into our system, and then we do really three things once we get this data into our system.
First and foremost, we completely anonymize this data. So, we don't- >> I was going to say, let's make sure we get that out. >> Yes, absolutely, so we don't have any customer references in this data. Two, we genericize this data. We're not looking for anomalies; we're looking for broad patterns, broad trends that we can apply across all of our customers and all of these enterprises that are running modern mission-critical applications in the cloud. And then three, we analyze it ten ways to Sunday. We look at this data, we look at what stands out in terms of good sample sizes, and that's what we reflect in this report. >> Okay, and just to close a loop on that, are there some applications that you don't include, 'cause they're just legacy applications running on the cloud that don't give you good information? Or are you basically taking them all in? >> Yeah, it's a good point. We collect all data and we collect all applications, so we don't opt applications in or out, for that matter, because we don't care about that. But what we do look for is significant sample size, because we want to make sure we're not talking about onesie-twosie applications here or there. We're looking for applications that have significant adoption in the cloud, and that's what gets reflected in this report. >> Okay, well, let's jump into it. We don't have time to go through the whole thing here now, but people can get it online; they can download their own version and go through it at their leisure. Biggest change from last year, as the fourth year of the report? >> Yeah, I mean, look, there are three big insights that we see in this report. The first one is, while we continue to see AWS rule in the cloud, and that's not surprising at all, we're starting to see pretty dramatic adoption of multi-cloud technologies. Two years ago, we saw a smidgen of multi-cloud in this report.
Now, we have seen almost 50% growth year over year in terms of multi-cloud adoption amongst enterprises who are in the cloud, and that's a substantial jump, albeit from a smaller baseline. >> Do you have visibility into whether those are new applications, or existing ones that are migrating to different platforms? Are they splitting? Do you have any kind of visibility into that? >> Yeah, it's an interesting point, and part of this is very related to the growth of Kubernetes that we also see in this report. What you've seen is that, in AWS itself, Kubernetes adoption has gone up significantly. What's even more interesting is that, as you think about multi-cloud adoption, we see Kubernetes as the platform that is driving this multi-cloud adoption. There is a very interesting chart in this report on page nine; obviously, you can see it if you want to download the report. If you're looking at AWS only, we see one in five customers adopting Kubernetes. If you're looking at AWS and GCP, Google Cloud Platform, we see almost 60% of our customers adopting Kubernetes. Now, when you put in AWS-- >> One in five at AWS, 60% with Google, so that means four out of five at GCP are using Kubernetes and bringing that average up. >> And then, if you look at AWS, Azure, and GCP, now you're talking about the creme de la creme of customers who want to adopt all three clouds, and it's almost 80% adoption of Kubernetes. So what it tells you is that Kubernetes has almost become the new Linux of the cloud world. If I want to deploy my application across multiple clouds, guess what, Kubernetes is the platform that enables me to deploy my application and then port it and re-target it to any other cloud or, for that matter, even an on-prem environment. >> Now, I mean, you don't see motivation behind action, but I'm just curious how much of it is, now that I have Kubernetes,
I can do multi-cloud, or, I've been wanting to do multi-cloud, and now that I have Kubernetes, I have an avenue. >> Yeah, it's sort of another question: what's the chicken and what's the egg here? My general sense, and we've debated this endlessly in our company, has been that the initiative to go multi-cloud typically comes top-down in an organization. It's usually the CIO or the CSO who says, you know what, we need to go multi-cloud. And there are various reasons to go multi-cloud, some of which you heard in our keynote today: it could be for more reliability, it could be for more choice, it could be because you don't want to get locked into any one cloud vendor. So that decision usually comes top-down. But then the engineering teams, the ops teams, have to support that decision, and what these engineering and ops teams have realized is that, if they deploy Kubernetes, they have a very good option available to port their applications very easily across these various cloud platforms. So Kubernetes, in some sense, is supporting the top-down decision to go multi-cloud, which is something that shows up in spades in this report. >> So, another thing that jumped out at me, or is there another top trend you want to make sure we cover before we get into some of those specifics? >> I mean, we can talk to-- >> Yeah, one of them that jumped out at me was Docker, the Docker adoption. Docker was the hottest thing since sliced bread about four years ago, and in the shadow of Kubernetes, not that they're replacements for one another specifically, it definitely put a bit of a pall on the buzz that was Docker. Yet here, Docker utilization, Docker use, is growing year over year. 30%! >> I'll be the first one to tell you that Docker adoption has not stalled at all. This is shown in the report. It's shown in the customers that we talk to.
I mean, everyone is down the path of containerizing their applications. The value of Docker is indisputable: that I get better agility, that I get better portability with Docker, cannot be questioned. Now, what is indeed happening is that everyone who is deploying Docker today is choosing an orchestration technology, and that orchestration technology happens to be Kubernetes. Again, Kubernetes is king of the hill: if I'm deploying Docker, I'm deploying Kubernetes along with it. >> Okay, another one that jumped out at me, which shouldn't be a big surprise, but I'm a huge fan of Andy Jassy, we do all the AWS shows, and one of the shining moments is always when he throws up the slide, he's got the Customer slide. >> There you go. >> It's the Services slide, which is in, like, 2.6-point font across a 100-foot screen that fills Las Vegas. And yet your guys' finding is that really, the top ten services are the vast majority of the AWS offerings being consumed. >> Yep, and not just that. It's that the top services in AWS are the infrastructure-as-a-service services. These are the core services that you need if you have to build an application in AWS: I need EC2, I need S3, I need identity and access management, otherwise I can't even log into AWS. So this again goes back to the first point I was making, that multi-cloud adoption is top of mind for many, many customers right now. It's something that many enterprises think about, and so, if I want to be able to port my application from AWS to any other environment, guess what I should be doing? I shouldn't be adopting every AWS service out there, because if I frankly adopted all these AWS services, the tentacles of the cloud vendor are such that I will not be able to port away from my cloud vendor to any other cloud service out there.
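The portability argument above — stick to core services and Kubernetes, and the same deployment can be re-targeted at any cloud — can be made concrete. Because Kubernetes exposes the same API everywhere, one deployment definition works unchanged against clusters in AWS, GCP, or Azure. A minimal illustrative Python sketch follows; the cluster context names are hypothetical, and a real rollout would apply the manifest through kubectl or a Kubernetes client rather than print statements:

```python
# One Kubernetes Deployment spec, re-targeted at clusters on different
# clouds simply by switching context -- the manifest itself never changes.
# Context and image names below are illustrative.

def deployment_manifest(app, image, replicas=3):
    """Build a cloud-agnostic Kubernetes Deployment as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": app}},
            "template": {
                "metadata": {"labels": {"app": app}},
                "spec": {"containers": [{"name": app, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("checkout", "registry.example.com/checkout:1.4")

# The same manifest would be applied per cluster, e.g.:
#   kubectl --context aws-prod   apply -f deployment.yaml
#   kubectl --context gcp-prod   apply -f deployment.yaml
#   kubectl --context azure-prod apply -f deployment.yaml
for context in ("aws-prod", "gcp-prod", "azure-prod"):
    print(f"would apply {manifest['metadata']['name']} to {context}")
```

This is exactly the "new Linux" role described in the interview: the manifest, not the cloud, becomes the stable interface.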
So, to a certain extent, many of the data points we have in this report support the story that enterprises are becoming more conscious of the cloud platform choices they are making. They want to at least keep the option of adopting a second or third cloud out there, and they're therefore consciously choosing the services they build their applications with. >> So, another hot topic, right? Computer science 101 is databases. We're just up the road from Oracle; Oracle OpenWorld's next week. A lot of verbal jabs between Oracle and some of the cloud providers on databases, et cetera. So what do the database findings come back as? >> I mean, look at the top four databases: Redis, MySQL, Postgres, Mongo. You know what's common across them? They're all open-source. They're all open-source databases, so if you're building your application, you find standard components, with a community, that you can build your application on and then take and move to any other cloud you want. That's takeaway number one. Takeaway number two: look at where Oracle is in this report. I think they're the eighth database in the cloud. I actually talked to a few customers of ours today. >> Now, are you sampling from Oracle's cloud? Is that a dataset? >> No, this is-- >> Yes, right, okay. So, I thought, I want to make sure. >> And if AWS is almost the universe of cloud today, we can debate it a bit, but it is close enough, I'd say, it tells you where Oracle is in this cloud universe. So our friends at Redwood City may talk about cloud day in and day out, but it's very clear that they're not making much of a dent in the cloud at this point. >> And then, is this the first year that, in the rollup by type of database, NoSQL exceeded relational databases?
>> No, I mean, we've been doing this for the last two years, and it's very clear that NoSQL is ahead of SQL in the cloud. I think the way we think about it is primarily that, when you are re-architecting your applications in the cloud, the cloud gives you an opening, an opportunity to reconsider how you build out your data layer, and many of our customers are saying NoSQL is the way to go. The scalability demands, the reliability demands: if my application is such that I now have the opportunity to rethink and redo my data layer, then frankly, NoSQL is winning the game. >> Right, it's winning big time. Another big one: serverless, Lambda. Actually, I'm kind of surprised it took so long to get to Lambda, 'cause we've been going to smaller atomic units of compute, storage, and networking for so, so long, but it sounds like, looks like, we're starting to hit some critical mass here. >> Yeah, I mean, look, Lambda's ready for primetime. We have seen that tipping point out here. Almost one in three customers of ours are using Lambda in production environments. And then, if you cast a wider net, go beyond production and even look at dev/test, what we see is that almost 60% of Sumo Logic's customers, and if you look at 2,000 customers, that's a pretty big sample size, almost 60% of enterprises are using Lambda in some way, shape, or form. So I think it's not surprising that Lambda is getting used quite well in the enterprise. The question really is: what are these people doing with Lambda? What's the intent behind the use of Lambda? And that's where I think we have to do some more research. My general sense, and I think it's shared widely within Sumo Logic, is that Lambda's still at the edges of the application. It's not at the core of the application.
People are not building their mission-critical applications on Lambda yet, because I think that paradigm of thinking about event-driven applications is still a little foreign to many organizations, so I think it'll take a few more years for an entire application to be built on Lambda. >> But you would think, if it's variable-demand applications, whether that's a marketing promotion around the Super Bowl or running the books at the end of the month, I guess it's easy enough to just fire up the servers versus doing pure Lambda at this point in time, but it seems like a natural fit. >> If you're doing the utility-type application, and you want to start it and you want to kill it and not use it after an event has come and gone, absolutely, Lambda's the way to go. The economics of Lambda absolutely make sense. Having said that, if you're going to build a true mission-critical application that you're going to keep on for a while to come, I'm not seeing a lot of that in Lambda yet, but it's definitely getting there. I mean, we have lots of customers who are building some serious stuff on Lambda. >> Well, a lot of great information. It's nice to have the longitudinal aspect as you do this year over year, and again, we're glad you're cheating, 'cause you're getting good data. >> (chuckles) >> (laughs) You're not asking people questions. >> Yeah, I mean, I'd like to finish by saying this is a report that Sumo Logic builds every year, not because we want to sell Sumo Logic; it's because we want to give back to our community. We want our community to build great apps. We want them to understand how their peers are building some amazing mission-critical apps in the cloud, so please download this report and learn from how your peers are doing things. That's our only intent and goal with this report. >> Great, well, thanks for sharing the information, and a great catch-up, nice event. >> All right, thank you very much, Jeff.
>> All right, he's Kalyan, I'm Jeff. You're watching theCUBE. We're at Sumo Logic Illuminate 2019. Thanks for watching, we'll see you next time. (upbeat electronic music)
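As a footnote to the serverless discussion above: an AWS Lambda function is just a per-event handler, which is why the start-it, kill-it, utility-style workloads described in the interview fit it so well — there is nothing running, and nothing billed, between invocations. A minimal illustrative sketch in Python follows; the event fields and discount logic are invented for the example. Locally the handler is just called with a dict, while in AWS it would be wired to a trigger such as API Gateway or SQS:

```python
# Minimal AWS Lambda-style handler: invoked once per event, no servers
# to provision or keep running between invocations. Event fields here
# are illustrative, not a real service's schema.

def handler(event, context=None):
    """Apply a promotional discount to an order event."""
    order_total = event["order_total"]
    promo_rate = event.get("promo_rate", 0.0)  # e.g. a Super Bowl promotion
    discounted = round(order_total * (1 - promo_rate), 2)
    return {"statusCode": 200, "discounted_total": discounted}

# Exercising the handler locally by passing an event dict directly.
result = handler({"order_total": 100.0, "promo_rate": 0.15})
print(result)  # {'statusCode': 200, 'discounted_total': 85.0}
```

Once the promotion ends, the trigger is removed and the function simply stops being invoked, which is the economic argument made in the interview.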

Published Date : Sep 12 2019


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jeff FrickPERSON

0.99+

JeffPERSON

0.99+

AWSORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Las VegasLOCATION

0.99+

Sumo LogicORGANIZATION

0.99+

Kalyan RamanathanPERSON

0.99+

Andy JassyPERSON

0.99+

secondQUANTITY

0.99+

KalyanPERSON

0.99+

30%QUANTITY

0.99+

last yearDATE

0.99+

60%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

oneQUANTITY

0.99+

Super BowlEVENT

0.99+

NoSQLTITLE

0.99+

100-footQUANTITY

0.99+

fourth yearQUANTITY

0.99+

OneQUANTITY

0.99+

third yearQUANTITY

0.99+

threeQUANTITY

0.99+

2,000 customersQUANTITY

0.99+

FirstQUANTITY

0.99+

ten weeksQUANTITY

0.99+

fourQUANTITY

0.99+

Burlingame, CaliforniaLOCATION

0.99+

LambdaTITLE

0.99+

TwoQUANTITY

0.99+

next weekDATE

0.99+

fiveQUANTITY

0.99+

five customersQUANTITY

0.99+

three customersQUANTITY

0.98+

ECDOTITLE

0.98+

second yearQUANTITY

0.98+

first oneQUANTITY

0.98+

two years agoDATE

0.98+

two thousandQUANTITY

0.98+

EsriTITLE

0.98+

SundayDATE

0.98+

todayDATE

0.98+

first pointQUANTITY

0.98+

KubernetesORGANIZATION

0.97+

KubernetesTITLE

0.97+

MySQLTITLE

0.97+

third cloudQUANTITY

0.96+

almost 80%QUANTITY

0.96+

LinuxTITLE

0.96+

SQLTITLE

0.95+

Redwood CityLOCATION

0.95+

Chris Hallenbeck, SAP | Nutanix .NEXT EU 2018


 

(futuristic electronic music) >> Live from London, England, it's theCUBE covering .Next Conference Europe 2018. Brought to you by Nutanix. >> Welcome back to Nutanix .Next 2018 in beautiful London, England. I'm Stu Miniman, my co-host Joep Piscaer, and happy to welcome back to the program, third time guest, I believe, Chris Hallenbeck, who's the senior vice president of database and data management with SAP. Fresh off the keynote stage this morning. You were with CEO Dheeraj Pandey? >> I was, a great time. >> So, SAP, things are going well. I see SAP at lots of shows. You've been on our program at a few different ones. You are based here in Europe now, you're from the US. Chris, introduce us a little bit. Give us some of the summary of what brings you specifically to the event. >> Well, I mean, several things. So, my responsibility is looking after the data platform. And what we're doing from a strategy perspective, what applications we're building on that in the cloud, everyone asks what are you doing with HANA? What are you doing with Data Hub? And so that's the core of what I spend time on. But equally I think you need to step back and look at SAP's business 'cause we're also our own OEM, right? HANA's what makes S4 possible. HANA's what powers all of our cloud applications. We're going to announce now that every one of those, every one of the acquired companies, now runs on HANA and not on any other database. And so you really see these three pillars of SAP. You talk about when I joined SAP seven years ago, and everyone said, why would you go there? Because this is an old applications company that seems to be getting old, and even Hasso Plattner, our founder, was saying that was true. We came out with HANA, and we quickly moved up, passed Teradata, to become the number four database company in the world. Still growing phenomenally.
They used HANA as a method of rejuvenation, originally for S4, and now that's gone to the cloud. And during that time, we were able to acquire all these cloud applications and build those, SuccessFactors, Ariba, and other stuff, and that's become a wildly successful business. >> Yeah, Chris, I wanted to step back for a second because you talk about data products. >> Yeah. >> You know, I've watched databases for my entire career. I've watched the huge growth of the importance of data. Especially the last few years. You know, we went through that big data wave, which was kind of a middling success, but everything today, data is the center of it all. You know, databases are where a lot of data lives, but how am I getting, and how are customers getting, more advantage out of their data when they are using your products? >> It's a great question. So, one is it continues to be the fact that people now have realtime access to that information. And it continues to actually be the biggest driver, to be honest. The other one where we see HANA getting picked, especially, is when you have tens or even hundreds of data feeds coming in simultaneously. Frequently, some are streaming, some are traditionally relational, coming from all different systems, and people then want to do analytics on that. But when we talk about analytics, I don't just mean a BI tool, although you could, but now we're doing predictive on that. And, in fact, then figuring out how does a data scientist go through, do machine learning, build a model, deploy it for scoring, from a full lifecycle perspective. And that's where HANA's getting used tremendously, in these analytic systems and data warehousing, and in particular people going, I want a realtime data warehouse. The other one where we see it a lot more is in applications, where HANA originally was only for SAP applications. We did a huge amount of work on that to make it work for OEMs and ISVs to port their applications over.
And you've been seeing that continuously. I think there's some phenomenal work we've done with Esri. HANA's now the fastest geospatial database in the world. And Esri, which has about 80% of the geospatial market, now prefers and runs on HANA. So that's been huge. So customers are beginning to use it in more areas. Not just SAP customers, or the CIO who ran the SAP systems; we're getting used a lot by the chief data officer's division. We're getting used by other groups. We're getting used by specialty firms doing things like geospatial, doing text analytics. And so it's been kind of exciting. I don't know if I answered your question, by the way, but-- >> No, I think that was really good. >> So that sounds like you positioned yourself to enable customers to make the most out of the cloud, make the most out of data, make the most out of IoT. But I'm curious, how are you helping customers succeed in that digital transformation? >> Yeah, well, with the digital transformation, and the way I always look at digital transformation, well, it's like big data, what does it mean, right? But what you see the patterns are is people are trying to remove layers between them and the actual consumer or the product. And if I can take those layers out, now you have people like Netflix who went all the way from just saying, let's make it easier to get a DVD, but now they are the movie studio directly to the consumer. They got rid of the 18-year-old kid at the video store, they got rid of everything through streaming. They went direct. They took out all these layers and got closer. Whether it's Airbnb and all these pure plays, that's exactly it, they've reduced the number of layers. Our existing customers are trying to do the same thing. They're saying, how do I get closer? How do I understand them? That requires, like if I'm running machinery, IoT data that will tell me exactly how they use my machinery.
If I can then start to take a look at that, now they want to work with me in different ways. Customers dictate how they're going to work with me. That means if one time they want to come over the web, and another time they want to phone, they should always be treated equally based on how important they are to me. Reducing layers. Equally, though, you always have to be worried about someone coming out of nowhere, the pure play that comes in with a brilliant idea in your division, and you can't let 'em just take you out. So what we're seeing is these traditional companies, not necessarily knowing what the digital transformation is, but saying, I've basically got to get fit. And I can't do that with a really complicated landscape. If my department says, oh, that's great, a new business model? We've got to have the accounting up and ready in three years to compete with this new entrant? It's not going to work. Yet if you upgrade your systems, and let's say SAP is your financials, somebody comes up with a new business model, that's a day's change in the system. You want to reorganize, that's a few clicks in the system, and I have a new hierarchy. That used to be a two year process. And so we're working in all different aspects. We can do the IoT, we can do the agile work, we can have the data science and machine learning understand the customer, all the way back to the applications that are agile now as people upgrade to the S4 system. >> Alright, I want to bring us back to the Nutanix show here, Chris. >> We like Nutanix, let's help them here. >> That's great, let's talk about platforms out there. You have applications that they all want to get certified on. Your application certified on their platform, so it's always, okay, am I SAP certified? And, okay, Nutanix even went through some redesign in their file system to make sure that they run really well for HANA, and we're real excited for the certification there. Talk a little bit about what goes into that.
Are there joint efforts between the companies? Or is it just them going through and following the process that you've got to describe? >> While I was on stage with Dheeraj and this wasn't, although it's nice to say supported database, this was a year and a half effort. In-memory computing, people get in and go, okay, it's not just a big data cache, this is a fundamentally different way software runs, how data stored in memory uses caches. So Nutanix worked with us, back and forth, on how we would have this happen. Now it was worth it to us. Our customers have been demanding simpler infrastructure. And these hyper-converged infrastructures are exactly that. And Nutanix being the leader, we wanted to be supportive. This is good for both of us. If our customers can have agility on both sides of the business, running traditional SAP applications, they've got to ramp up, they need to add 100,000 users at quarter end, they can do that with a Nutanix platform. Equally, they want to quickly bring up an agile data mart on a project basis, click a button, have a new data mart in seven minutes like they did on stage. And maybe they don't even want to do that on their on-prem cloud. They want to do that on AWS or somewhere, GCP, they can do that. Yet that's all controlled from a single interface running through Nutanix. So really, really good for both of us. >> SAP partners with a lot of companies out there, so you have kind of a neutral view when it comes down to everything. I'm sure you have certain partners you work more with and less. But what are you hearing from your customers? How do they think of cloud today? And any more about the Nutanix connection along the way. >> Yeah, it's interesting 'cause talk about data density, the most valuable data a company has is typically sitting, if they're an SAP customer, in their SAP system. It's exactly who is my customer, what did they buy, what is their service, what is their bill of material?
All that, it's very value dense. There's a huge amount of security and governance around it. What we've actually been seeing is a lot of them saying, yes, we're moving those workloads to the cloud to save money, but I've actually seen a fair number come back on-premise. 'Cause they're saying, look, I'm not getting rid of SAP for easily the next seven years, we have no plans to. So then they're realizing, I can run this on a private cloud infrastructure and actually save a ton of money. So they've been pulling back on prem, and we've been hearing that from everyone, and Forrester, and Gartner, and IDC are saying the same things. We have a lot of folks who don't want to go to the cloud with that core system yet, or they're saying, look, I've got to save money and I think I'm going to the cloud, but I'm not ready. And so that's exactly where we see private cloud being really, really crucial, and then the ability to then push out and be ready to go to the cloud. Nutanix really is a good solution for that. And in particular, on-prem database right now, depending on whose estimates you use, is roughly growing at 5% to 8%, as a five-year CAGR. On-prem private cloud is forecast to grow at 26%. I mean, that is massive. Cloud's only 40% overall for databases. So you see it's a close second. So, huge, huge growth. What's declining is bare metal on-prem, it's gone. Everyone wants to run either a virtualized or fully hyper-converged infrastructure now, even on-prem. So we see people, like I said, staying on, getting ready to go to the cloud. A lot of people pushing workloads to the cloud, but even some repatriation. >> Alright, well, Chris Hallenbeck, really appreciate the updates. Thanks for everything and-- >> Well, thanks for having me. I always love speaking with you guys, thank you. >> Awesome, thanks so much. Joep Piscaer, I'm Stu Miniman, we'll be back with more programming from Nutanix .Next 2018, thanks for watching theCUBE. (futuristic buzzing) (futuristic electronic music)
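The growth figures Chris cites compound annually, which is what makes the gap dramatic: 8% versus 26% CAGR diverge quickly over five years. A quick illustrative calculation (the rates are Chris's rough market figures, not audited data; the index base of 100 is arbitrary):

```python
def compound(base, cagr, years):
    """Grow a base value at a constant compound annual growth rate (CAGR)."""
    return base * (1 + cagr) ** years


# Index both markets at 100 today and compound over five years.
on_prem_db = compound(100, 0.08, 5)     # top of the quoted 5-8% range
private_cloud = compound(100, 0.26, 5)  # the quoted 26% forecast

print(f"on-prem database index after 5 years:  {on_prem_db:.1f}")
print(f"private-cloud index after 5 years:     {private_cloud:.1f}")
```

Roughly 147 versus 318: the private-cloud market more than triples while on-prem databases grow by about half, which is the "massive" difference Chris is pointing at.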

Published Date : Nov 29 2018


CJ Smith, Riverside Public Utilities | PI World 2018


 

>> Announcer: From San Francisco, it's theCUBE! Covering OSIsoft PI World 2018. Brought to you by OSIsoft. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at OSIsoft's PI World 2018 in downtown San Francisco; they've been at it for decades and decades and decades, talking really about OT and efficiency. And we're excited to be here, it's our first time, and we really want to talk to a customer, so excited to have our next guest, CJ Smith. She's a project manager for the city of Riverside. CJ, great to see you. >> Thank you, hi! >> So you represent a whole slew of mid-sized US cities, so how big is Riverside for people that aren't familiar? >> We serve 120,000 customers, so we're not too small, but we're definitely not as big as some of the other cities. >> Right, and then as we said before we turned on the cameras, you guys have a whole department for utilities, you have your own utility as well. >> Yes, we do have a public utility division within the city, also an IT and public works, parks and rec like other cities as well. But we do have the utility, which is different than some of the stand-alone utilities, like LADWP for example. >> Right, but it's good, you were saying off camera that that gives you guys a nice revenue source, so it's a nice asset for the city to have. >> Yeah, the utility is a revenue generating department. >> Okay, so what are you doing here at PI World, how are you guys using OSIsoft software? >> So we started down PI back in August 2016, as an enterprise agreement customer, and at that time we really lacked visibility into our system, so we needed something to help us gather the data and make sense of it, because we had data all over the place, and it was hard to answer simple questions, it was hard to find simple data. And so we started down the PI journey at that time, and we basically used it like a data hub to aggregate data, turn that data into information, and then we disseminate it using dashboards.
So PI Vision dashboards which used to be PI Coresight, as well as reports. >> So what were some of the early data sources that you leveraged, that you saw the biggest opportunity to get started, or yet even more importantly your earliest successes where'd your early success come from? >> So our very first work group that we worked with was our Water Operations and our Water SCADA team. >> Seems to be a pattern here a lot of water talk here at OSIsoft. >> Yeah I'll talk about electricity too. But we started on water and the first thing we did was implement their data, it was called a Water Operations dashboard, and they were doing it manually in Excel, and it would take a staff person over eight hours to do it. And they would do it the next day for the previous day data. So imagine how opposite of real time that is right? So we integrated that data with PI. >> And how many data elements? How big is the spreadsheet this poor person is working on? >> So the Water SCADA tags that we brought in were near 1500 tags, so you imagine that much data and calculations with over 1500 calculations behind it. So it was a ton of effort. >> Right. >> And a huge quick win for them! So it's saved staff time, they now have actual intelligence, real time data, the managers get alerts to their phones about the status of wells, and so it was really helpful to that work group. So that one was one of our first and earliest wins on PI. >> Was it a hard sell? To those people to use it? It wasn't because we did find a champion in that group, someone that would help us. Actually the manager he was very interested in technology and automation. And they understood that even though it would be a time investment up front, it would save them a ton of time in the long run, for the rest of the year. And so one of the things that helped us get buy-in early on is that we used an Agile approach. So we would tell the manager, I only need you for five weeks. 
I need you and your staff for five weeks, and then you don't have to talk to us anymore. We will deliver the product in five weeks, we will do all the work, but if you could give us five weeks of your time, then you could have all your time back the rest of the year. And that helped us get buy-in from the managers and a commitment, because they can identify with okay just five weeks. >> Right so those were probably the operational folks, what about on the IT folks how was getting buy-in from the IT folks? >> The funny thing is and the thing we did different is, we have a great relationship with IT, and we really forged a partnership with them early on, even from the very beginning when we were just reviewing the agreement. We got their buy-in early on to say okay, this is what we're thinking about doing, we want you to be part of the team, and we really built a partnership with this project so that it could be successful. So they work hand in hand with our PI implementation team every step of the way. They've been on this journey every step of the way with us. So we don't have some of the challenges that other companies that I hear are talking a lot about here with IT and it kind of being a bottleneck, we didn't have that same experience because we really worked hard up front to have the buy-in with them and really build a partnership with them, so that they're implementing PI with us. And another selling point with that is, we're using PI as a data hub or like a bus, a data bus essentially. So for them it's good because we're saying look we're only going to have this point to point system, instead of having all of these individual points we're only going to connect to one system, which will be easier for them to manage and maintain, and we'll instruct staff to go to PI to get the data. So that's a selling point for IT it's more secure, it's more manageable. >> And did you use an outside integrator, or did you guys do it all in house? 
>> Our implementation team is a combination of in house staff and a consulting firm as well. >> And then it's curious 'cause then you said once you add all the data it's kind of a data bus, how long did it take for somebody to figure out hmmm this is pretty cool maybe there's data set number two, data set number three, data set number four? >> So right after our first six week implementation, we rolled out a new implementation every four to six weeks. >> Every four to six weeks? >> Yeah so we did a sprint cycle the whole first year, and actually the whole second year we're currently in right now, and so we touched a different work group every single time, delivering a new solution to them. So we picked up a lot of traction so much that now, other departments in the city want it, public works is asking for it, the city manager's office so it's really picking up some good buzz, and we're kind of working our way down discussion of smart city talks, and seeing how PI can support smart city, big data advanced analytic initiatives at the city. >> So what are some of the favorite examples of efficiency gains, or savings that department A got that now department B sees and they want to get a piece of that what are some of your favorite success stories? >> I would say two of mine, I shared one on the big stage yesterday about the superpower I talked about our operations manager, who started receiving actionable intelligence overnight. And he got an alert around midnight, and he called his operator and said hey, what's going on with that well? And the operator said very puzzled, how do you know that there's something going on with this well? And he replied and said because I have superpowers. And so his superpower was PI, and that's one of my favorite stories because it's just simple and it resonates with people, because he is receiving alerts and push notifications that he never had before to his mobile device at home. So that's a huge win. 
>> Was the operator tied in to that same notification, or did that person know before the operator? >> The manager knew before the operator. So the operator didn't know about PI at the time and we had just rolled it out. And so the manager was just kind of testing it and adopting it, and so it was kind of like he had a leg up a little bit and they were confused like how do you know you're at home? >> Man: Right. >> He's like I have superpowers. (laughing) It's probably my funniest and best story, and one that I always tell because it helps everyone, no matter if it's an executive to a field person, really understand the power behind PI. I think another one if I had to pick another example of a win that I think was powerful is, our work order and field map. So we have our field crews right now that have a map, that's powered from our work order and asset management system pushing data to PI, which then pushes it to Esri through the PI integrator, and they're out using it in the field and it helps them route their work, they can see where their workers are, they can see customer information. And that map is really changing the way the field crews work. So imagine a day before this system where, they would go in and have to print every work order from the system. And not all asset management systems are really user friendly. They're kind of archaic a little clunky, so I won't say the name of our system. >> And doesn't work well if there's a change right? >> Yeah and they're not really mobile friendly. So that's part of the challenge, but because of that now public works wants that map, parks and rec every department that has field forces, they want something similar so that they can get all the data from all the other systems in one app in one location on their device. >> And do you find that's kind of a system pattern, where often department A needs very similar to what department B needed with just a slight twist? 
So it's pretty easy to make minor modifications to leverage work across a bunch of different departments? >> Absolutely a lot of work groups are similar, maybe a little different like you said, but especially those that have field forces. Sometimes it makes it easy to sell it to the next group, it's like look this is what we've done, is this something that you kind of need? Or what would you need differently? Like we've developed field collection tools. That's easy to replicate. Once you see it it's easy to say you know what that works but I need it to say this and I need it to say this. If you just show them a white paper, it's hard for them to say this is what I need. Most people just don't know, but it's easy once you see a suit to say oh I don't like that tie I don't like that shirt, I don't like those pants. >> But something close. >> Yeah but something like that right? So that's the benefit once you start having a solution to easily modify and reproduce. And then the good thing about Agile, you're running sprints so you're learning every sprint. You're kind of learning as you go, and you're able to refine it and refine it and make the process that much better. >> Right. On the superpower thing employee retention is a challenge, getting good people is a challenge, I'm just curious how that impacts the folks working for you, that now suddenly they do have this new tool that does allow them to do their job better, and it's not just talk it's actually real and gave that person a head up on the actual operation person sitting on the monitor devices. So as it proliferates what is the impact on morale, and are more people rising up to say hey, I want to use it for this I want to use it for that. >> Yeah we are getting a lot of interest, and I think the challenge is, and I talked about this a little bit during my session, is change management and culture. 
Some people see automation and technology as sometimes a threat because of job security, or the I've always done it this way type of mentality. >> Man: Never a good answer. >> Right but once you kind of get them to see that we're just automating your process to make it better so that you can do cooler and better things, so that you can actually analyze the data instead of inputting data. So you can actually solve problems versus spending all your time trying to identify the data and collect information. So staff are starting to see the value, and after the first year and a half, we've gotten a lot of traction. I don't really have to sell it as much, it's now such a huge part of our culture that the first question when we want to implement a new system is does that integrate with PI? I don't even have to ask them. Everyone else is asking well have you thought about using PI for that? So we always kind of look to PI first to say, can we create this solution in PI? And then if not we look at other solutions and if we're looking at other solutions we say, does that solution integrate with PI? So that's become part of our norm to make sure that it plays nice with what we're calling our foundational technology which is PI. >> Right so you talked a lot about departments. Is there kind of a cross-department city level play that you're rolling data and or dashboards into something that's a higher level than just the department level? >> Yeah so far the only thing that we have done that's kind of cross divisional not just in one division, is our overtime dashboards. So we recently created overtime dashboards throughout the entire city so that executive level department heads have visibility into overtime, which just gives them trends so that they can know what departments are receiving the most overtime? Is that overtime associated with what type of cause? Was it something outside of our control? Was it a planned overtime? And then most importantly where we're trending. 
Where are we on track to be by the end of the year, given our current rate so that they can be proactive in making changes. Do we need to do something different? Do we need to hire more people in this department? Do we have too many people in this department? Can we make shifts? So it's giving that level of visibility, and that's a new rollout that we just have completed, but it's something that we're already seeing a lot of interest in doing more of. Cross divisional things so that the city manager's office and that level has more view into the whole city. >> Right well CJ it sounds like you're doing a lot of fun stuff down at Riverside. >> Woman: We are we are! >> And you can never save enough water in California, so that's very valuable work. >> Woman: That's true! >> Well thanks for taking a minute and sharing your story, I really enjoyed it. >> Thank you for having me. >> Absolutely she's CJ Smith I'm Jeff Frick, you're watching theCUBE from OSIsoft PI World 2018 in San Francisco, thanks for watching. (upbeat music)
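The year-end trending CJ describes boils down to a run-rate projection: extrapolate year-to-date overtime at the current pace and flag departments heading off plan. A toy sketch of that arithmetic (department names, hours, and budgets are invented; this is not Riverside's actual dashboard logic):

```python
def project_year_end(ytd_hours, weeks_elapsed, weeks_per_year=52):
    """Project year-end overtime hours assuming the current weekly rate holds."""
    weekly_rate = ytd_hours / weeks_elapsed
    return weekly_rate * weeks_per_year


# Hypothetical departments: (YTD overtime hours, weeks elapsed, annual budget).
departments = {
    "water_ops": (1300, 20, 3000),
    "field_crews": (800, 20, 2500),
}

for name, (ytd, weeks, budget) in departments.items():
    projected = project_year_end(ytd, weeks)
    status = "OVER budget" if projected > budget else "on track"
    print(f"{name}: projected {projected:.0f} h vs budget {budget} h -> {status}")
```

The proactive part of the dashboard is exactly this comparison of projection against plan: a department trending over can shift staff or hiring months before the year closes.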

Published Date : Apr 28 2018


Paul Appleby, Kinetica | Big Data SV 2018


 

>> Announcer: From San Jose, it's theCUBE. (upbeat music) Presenting Big Data, Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners. >> Welcome back to theCUBE. We are live on our first day of coverage of our event, Big Data SV. This is our tenth Big Data event. We've done five here in Silicon Valley. We also do them in New York City in the fall. We have a great day of coverage. We're next to where the Startup Data conference is going on at Forager Tasting Room and Eatery. Come on down and be part of our audience. We also have a great party tonight where you can network with some of our experts and analysts, and tomorrow morning we've got a breakfast briefing. I'm Lisa Martin with my co-host, Peter Burris, and we're excited to welcome to theCUBE for the first time the CEO of Kinetica, Paul Appleby. Hey Paul, welcome. >> Hey, thanks, it's great to be here. >> We're excited to have you here. As a marketer, terms are something I grasp onto, and Kinetica is the insight engine for the extreme data economy. What is the extreme data economy, and what are you guys doing to drive insight from it? >> Wow, how do I put that in a snapshot? Let me share with you my thoughts on this, because the fundamental principles around data have changed. You know, in the past, our businesses were really validated around data. We reported out how our business performed. We reported to our regulators. Over time, we drove insights from our data. But today, in this kind of extreme data world, in this world of digital business, our businesses need to be powered by data. >> So let me test this on you: one of the ways that we think about it is that data has become an asset. >> Paul: Oh yeah. >> It's become an asset. But now the business has to care for it, has to define it, feed it, continue to invest in it, find new ways of using it. Is that kind of what you're suggesting companies think about? >> That's absolutely what we're saying.
I mean, if you think about what Angela Merkel said at the World Economic Forum earlier this year, she saw data as the raw material of the 21st century, and talked about Germany fundamentally shifting from being an engineering- and manufacturing-centric economy to a data-centric economy. So this is not just about data powering our businesses; this is about data powering our economies. >> So let me build on that, if I may, because I think it gets to what, in many respects, is Kinetica's core value proposition. And that is that data is a different type of asset. Most assets are characterized by the fact that you apply them here or you apply them there; you can't apply them in both places at the same time. And that's one of the misnomers of the notion of data as fuel, because fuel is still an asset with certain specificities: you can't apply it to multiple places. >> Absolutely. >> But data you can, which means you can copy it, you can share it, you can combine it in interesting ways. But that means that to use data as an asset, especially given the velocity and the volume we're talking about, you need new types of technologies that are capable of sustaining the quality of that data while making it possible to share it across all the different applications. Have I got that right? And what does Kinetica do in that regard? >> You absolutely nailed it, because what you talked about is a shift from predictability associated with data to unpredictability. We actually don't know the use cases we're going to leverage our data for moving forward, but we understand how valuable an asset it is. And I'll give you two examples of that. There's a company here, based in the Bay Area, a really cool company called Liquid Robotics. They build these autonomous aquatic robots that carry a vast array of sensors collecting data. And of course, that's hugely powerful for oil and gas exploration, for research, for shipping companies, and so on.
Even homeland security applications. They were selling the robots, but what they realized over time is that the value of their business wasn't the robots, it was the data. And one piece of data has a totally different meaning to a shipping company than it does to a fisheries company, yet they could sell that exact same piece of data to multiple companies. Now, of course, their business has grown and scaled; I think they were acquired by Boeing. But what you're talking about is exactly where Kinetica sits. It's an engine that allows you to deal with the unpredictability of data, not only the sources of data but the uses of data, and enables you to do that in real time. >> So Kinetica's technology was actually developed to meet some intelligence needs of the US Army. My dad was a former army ranger, airborne. So tell us a little bit about that and the genesis of the technology. >> Yeah, it's a fascinating use case if you think about it. We're all concerned, globally, about cyber threats. We're all concerned about terrorist threats. But how do you identify terrorist threats in real time? The only way to do that is to consume vast amounts of data, whether it's drone footage or traffic cameras, mobile phone data or social data. The ability to stream all of those sources of data and conduct analytics on them in real time was really the genesis of this business. It was a research project with the army and the NSA aimed at identifying terrorist threats in real time. >> But at the same time, you not only have to be able to stream all the data in and do analytics on it, you also have to have interfaces and understandable approaches to acquiring the data, because I have some background in that as well, to then be able to target the threat. So you have to be able to get the data in and analyze it, but also get it out to where it needs to be so an action can be taken.
>> Yeah, and there are two big issues there. One issue is the interoperability of the platform and the ability not only to consume data in real time from multiple sources, but to push it out to a variety of platforms in real time. That's one thing. The other thing is to understand that in the world we're talking about today, there are multiple personas that want to consume that data, and many of them are not data scientists. They're not IT people, they're business people. They could be executives, or they could be field operatives in the case of intelligence. So you need to be able to push this data out in real time onto platforms that they consume, whether it's via mobile devices or any other device, for that matter. >> But you also have to be able to build applications on it, right? >> Yeah, absolutely. >> So how does Kinetica facilitate that process? Because it looks like a database; it's more than that, but it satisfies some of those conventions, so developers have an affinity for it. >> Absolutely. So in the first instance, we provide tools ourselves for people to consume that data and leverage its power in real time in an incredibly visual way with a geospatial platform. But we also create the ability to interface with really commonly used tools, because if you think about providing some sort of ubiquitous access to the platform, the easiest way to do that is through tools people are used to using, whether that's something like Tableau, for example, or Esri, if you want to talk about geospatial data. So in the first instance, it's about providing access, in real time, through platforms people are used to using.
And then, of course, by building our technology in a really, really open framework with a broadly published set of APIs, we're able to support our customers in building applications on that platform, whether those are applications associated with autonomous vehicles or with smart cities. We're doing some incredible things with some of the bigger cities on the planet, leveraging the power of big data to optimize transportation, for example, in the city of London. Those are the sorts of things we're able to do with the platform. So it's not just a database platform or an insights engine for dealing with these complex, vast amounts of data, but also the tools that allow you to visualize and utilize that data. >> Turn that data into an action. >> Yeah, because the data is useless until you're doing something with it. And that's really the promise of things like the smart grid: collecting all of that data from all of those smart sensors is absolutely useless until you take an action that is meaningful for a consumer, or meaningful in terms of the generation and consumption of power. >> So Paul, as the CEO, when you're talking to customers, we talk about chief data officers, chief information officers, chief information security officers, data scientists, engineers; there are so many stakeholders that need access to the data. As businesses transform, new business models can emerge if, like you were saying, the data is evaluated and made meaningful. I'm curious which personas are at the table (Paul laughs) when you're talking about the business value this technology can deliver. >> Yeah, that's a really, really good question, because the truth is, there are multiple personas at the table.
Now, we in the technology industry are quite often guilty of only talking to the technology personas. But as I've traveled around the world, whether I'm meeting with the world's biggest banks, the world's biggest telcos, or the world's biggest auto manufacturers, the people we meet, more often than not, are the business leaders. And they're looking for ways to solve complex problems. How do you bring the connected car alive? How do you really bring it to life? One car traveling around the city for a full day generates a terabyte of data. So what does that really mean when we start to connect the billions of cars that are in the marketplace in the framework of the connected car, and then, ultimately, in a world of autonomous vehicles? So, for us, we're trying to navigate an interesting path. We're dragging the narrative out of a purely technology-based narrative of speeds and feeds, algorithms, and APIs, into a narrative about, well, what does it mean for the pharmaceutical industry, for example? Because when you talk to pharmaceutical executives, the holy grail for the pharma industry is: how do we bring new and compelling medicines to market faster? The biggest challenge for them is the cycle time to bring new drugs to market. So we're helping companies like GSK shorten the cycle times to bring drugs to market. Those are the kinds of conversations we're having. It's really about how we're taking data to power a transformational initiative in retail banking, in retail, in telco, in pharma, rather than a conversation about the role of technology. Now, we always need to deal with the technologists. We need to deal with the data scientists and the IT executives, and that's an important part of the conversation. But you would have seen, in recent times, that the conversation we're trying to have is far more of a business conversation. >> So if I can build on that.
Do you think, in your experience, and recognizing that you have a data management tool along with other tools that help people use the data that gets into Kinetica, that we're going to see the population of data scientists increase fast enough that our executives don't have to become familiar with this new way of thinking, or are executives going to actually adopt some of these new ways of thinking about the problem from a data risk perspective? I know which way I think. >> Paul: Wow. >> Which way do you think? >> It's a loaded question, but I think if we're going to be in a world where business is powered by data, where our strategy is driven by data, our investment decisions are driven by data, and the new areas of business that we explore to create new paths to value are driven by data, we have to make data more accessible. And if what you need to get access to the data is a whole team of data scientists, it kind of creates a barrier. I'm not knocking data scientists, but it does create a barrier. >> It limits the aperture. >> Absolutely, because every company I talk to says, "Our biggest challenge is, we can't get access to the data scientists that we need." So a big part of our strategy from the get-go was to actually build a platform with all of these personas in mind. It's built on the common principles of a relational database, built around ANSI-standard SQL. >> Peter: It's recognizable. >> It's recognizable, and consistent with the kinds of tools that executives have been using throughout their careers. >> Last question; we've got about 30 seconds left. >> Paul: Oh, okay. >> No pressure. >> You have said Kinetica's plan is to measure the success of the business by your customers' success. >> Absolutely. >> Where are you on that? >> We've begun that journey. I won't say we're there yet. We announced three weeks ago that we created a customer success organization.
We've put about 30% of the company's resources into that customer success organization, and that entire team is measured not on revenue, not on projects delivered on time, but on value delivered to the customer. So we baseline where the customer is at, we agree on what we're looking to achieve with each customer, and we measure that team entirely against the delivery of those benefits to the customer. So it's a journey. We're on that journey, but we're committed to it. >> Exciting. Well, Paul, thank you so much for stopping by theCUBE for the first time. You're now a CUBE alumni. >> Oh, thank you, I've had a lot of fun. >> And we want to thank you for watching theCUBE. I'm Lisa Martin, live in San Jose with Peter Burris. We are at the Forager Tasting Room and Eatery, a super cool place. Come on down and hang out with us today. We've got a cocktail party tonight where you're sure to learn lots of insights from our experts, and again tomorrow morning. But stick around, we'll be right back with our next guest after a short break. (CUBE theme music)

Published Date : Mar 7 2018

