Is Supercloud an Architecture or a Platform | Supercloud2


 

(electronic music) >> Hi everybody, welcome back to Supercloud 2. I'm Dave Vellante with my co-host John Furrier. We're here at our tricked out Palo Alto studio. We're going live wall to wall all day. We're inserting a number of pre-recorded interviews, folks like Walmart. We just heard from Nir Zuk of Palo Alto Networks, and I'm really pleased to welcome in David Flynn. David Flynn, you may know as one of the people behind Fusion-io, completely changed the way in which people think about storing data, accessing data. David Flynn now the founder and CEO of a company called Hammerspace. David, good to see you, thanks for coming on. >> David: Good to see you too. >> And Dr. Nelu Mihai is the CEO and founder of Cloud of Clouds. He's actually built a Supercloud. We're going to get into that. Nelu, thanks for coming on. >> Thank you, Happy New Year. >> Yeah, Happy New Year. So I'm going to start right off with a little debate that's going on in the community if you guys would bring out this slide. So Bob Muglia earlier today, he gave a definition of Supercloud. He felt like we had to tighten ours up a little bit. He said a Supercloud is a platform, underscoring platform, that provides programmatically consistent services hosted on heterogeneous cloud providers. Now, Nelu, we have this shared doc, and you've been in there. You responded, you said, well, hold on. Supercloud really needs to be an architecture, or else we're going to have this stovepipe of stovepipes, really. And then you went on with more detail, what's the information model? What's the execution model? How are users going to interact with Supercloud? So I start with you, why architecture? The inference is that a platform, the platform provider's responsible for the architecture? Why does that not work in your view? >> No, it's a very interesting question. So whenever I think about platform, what's the connotation, you think about a monolithic system? Yeah, I mean, I don't know whether it's true or not, but there is this connotation of monolithic. On the other hand, if you look at what's a problem right now with HyperClouds, from the customer perspective, they're very complex. There is a heterogeneous world where actually every single one of these HyperClouds has their own architecture. You need rocket scientists to build cloud applications. There is always this contradiction between cost and performance. They fight each other. And I'm quoting here a former friend of mine from Bell Labs who worked at AWS who used to say "Cloud is cheap as long as you don't use it too much." (group chuckles) So clearly we need something that kind of plays, from the principle point of view, the role of an operating system, that sits on top of this heterogeneous HyperCloud, and there's nothing wrong with having these proprietary HyperClouds, think about processors, think about operating systems and so on, so forth. But in order to build a system that is simple enough, I think we need to go deeper and understand. >> So the argument, the counterargument to that, David, is you'll never get there. You need a proprietary system to get to market sooner, to solve today's problem. Now I don't know where you stand on this platform versus architecture. I haven't asked you, but. >> I think there are aspects of both for sure. I mean it needs to be an architecture in the sense that it's broad based and open and so forth. 
But you know, platform, you could say as long as people can instantiate it themselves, on their own infrastructure, as long as it's something that can be deployed as, you know, software defined, you don't want the concept of platform being the monolith, you know, combined hardware and software. So it really depends on what you're focused on when you're saying platform, you know, I'd say as long as it's a software defined thing, to where it can literally run anywhere. I mean, because I really think what we're talking about here is the original concept of cloud computing. The ability to run anything anywhere, without having to care about the physical infrastructure. And what we have today is not that, the cloud today is a big mainframe in the sky, that just happens to be large enough that once you select which region, generally you have enough resources. But, you know, nowadays you don't even necessarily have enough resources in one region, and then you're kind of stuck. So we haven't really gotten to that utility model of computing. And you're also asked to rewrite your application, you know, to abandon the conveniences of high performance file access. You got to rewrite it to use object storage stuff. We have to get away from that. >> Okay, I want to just drill on that, 'cause I think I like that point about, there's not enough availability, but on the developer cloud, the original AWS premise was targeting developers, 'cause at that time, you had to provision a Sun box, get a Cisco DSU/CSU, now you get on the cloud. But I think you're giving up the scale question, 'cause I think right now, scale is huge, enterprise grade versus cloud for developers. >> That's right. >> Because I mean look at, Amazon, Azure, they got compute, they got storage, they got queuing, and some stuff. If you're doing a startup, you throw your app up there, localhost to cloud, no big deal. It's the scale thing that gets me- >> And you can tell by the fact that, in regions that are under high demand, right, like in London or LA, at least with the clients we work with in the media and entertainment space, it costs twice as much for the exact same cloud instances that do the exact same amount of work, as somewhere out in rural Canada. So why is it you have such a cost differential, it has to do with that supply and demand, and the fact that the clouds don't really give you the ability to run anything anywhere. Even within the same cloud vendor, you're stuck in a specific region. >> And that was never the original promise, right? I mean it was, we turned it into that. But the original promise was get rid of the heavy lifting of IT. >> Not have to run your own, yeah, exactly. >> And then it became, wow, okay I can run anywhere. And then you know, it's like web 2.0. You know people say why Supercloud, you and I talked about this, why do you need a name for Supercloud? It's like web 2.0. >> It's what cloud was supposed to be. >> It's what cloud was supposed to be, (group laughing and talking) exactly, right. >> Cloud was supposed to be run anything anywhere, or at least that's what we took it as. But you're right, originally it was just, oh don't have to run your own infrastructure, and you can choose somebody else's infrastructure. >> And you did that >> But you're still bound to that. >> Dave: And people said I want more, right? >> But how do we go from here? >> That's, that's actually, that's a very good point, because indeed, when the first HyperClouds were designed, they were really designed with a focus on customers. 
I think Supercloud is an opportunity to design it in the right way, also having in mind computer science rigor. And we should take advantage of that, because in fact actually, if cloud would've been designed properly from the beginning, we probably wouldn't have needed Supercloud. >> David: You wouldn't have to have been asked to rewrite your application. >> That's correct. (group laughs) >> To use REST interfaces to your storage. >> Revisionist history is always a good one. But look, cloud is great. I mean your point is cloud is a good thing. Don't hold it back. >> It is a very good thing. >> Let it continue. >> Let it go as it is. >> Yeah, let that thing continue to grow. Don't impose restrictions on the cloud. Just refactor what you need to for scale or enterprise grade or availability. >> And you would agree with that, is that true, or is that the problem you're solving? >> Well yeah, I mean, what the cloud is doing is absolutely necessary. What the public cloud vendors are doing is absolutely necessary. But what's been missing is how to provide a consistent interface, especially to persistent data. And have it be available across different regions, and across different clouds. 'Cause data is a highly localized thing in current architecture. It only exists as rendered by the storage system that you put it in. Whether that's a legacy thing like a NetApp or an Isilon or even a cloud data service. It's localized to a specific region of the cloud in which you put that. We have to delocalize data, and provide a consistent interface to it across all sites. That's high performance, local access, but to global data. >> And so Walmart earlier today described their, what we call Supercloud, they call it the Walmart cloud native platform. And they use this triplet model. They have AWS and Azure, no, oh sorry, no AWS. They have Azure and GCP and then on-prem, where all the VMs live. When you, you know, probe, it turns out that it's only stateless in the cloud. (John laughs) So, the state stuff- >> Well let's just admit it, there is no such thing as stateless, because even the application binaries and libraries are state. >> Well I'm happy that I'm hearing that. >> Yeah, okay. >> Because actually I have a lot of debate (indistinct). If you think about it, no software running on a (indistinct) machine is stateless. >> David: Exactly. >> This is something that was- >> David: And that's data that needs to be distributed and provided consistently >> (indistinct) >> Across all the clouds, >> And actually, it's nonsense, but- >> Dave: So it's an illusion, okay. (group talks over each other) >> (indistinct) you guys talk about stateless. >> Well, see, people make the confusion between state and persistent state, okay. Persistent state is a different thing. State is a different thing. So, but anyway, I want to go back to your point, because there's a lot of debate here. People are talking about data, some people are talking about logic, some people are talking about networking. In my opinion it's this triplet, which is data, logic and connectivity, that has equal importance. And actually depending on the application, it can have the center of gravity moving towards data, moving towards what I call execution units or workloads. And connectivity is actually the most important part of it. >> David: (indistinct). >> Some people are saying move the logic towards the data, some other people, and you are saying actually, that no, you have to build a distributed data mesh. 
What I'm saying is actually, you have to consider all these three variables, all these vectors, in order to decide, based on the application, what's the most important. Because sometimes- >> John: So the application chooses >> That's correct. >> Well, it's what operating systems were in the past, principally the thing that runs and manages the jobs, the job scheduler, and the thing that provides your persistent data (indistinct). >> Okay. So we finally got operating system into the equation, thank you. (group laughs) >> Nelu: I actually have a PhD in operating systems. >> 'Cause what we're talking about is an operating system. So forget platform or architecture, it's an operating environment. Let's use it as a general term. >> All right. I think that's about it for me. >> All right, let's take (indistinct). Nelu, I want to ask you quick, 'cause I want to give a, 'cause I believe it's an operating system. I think it's going to be a reset, refactored. You wrote to me, "The model of Supercloud has to be open, theoretical, has to satisfy the rigors of computer science, and customer requirements." So unique to today, if the OS is going to be refactored, it's not going to be, may or may not be Red Hat or somebody else. This new OS, obviously requirements are for customers too, but what's the computer science that is needed? Where are we, what's missing? Where's the science in this shift? It's not your standard OS, it's not like an- (group talks over each other) >> I would beg to differ. >> (indistinct) truly an operating environment. But if you think about it, and make analogies, what you need when you design a distributed system, well, you need an information model, yeah. You need to figure out how the data is located and distributed. You need a model for the execution units, and you need a way to describe the interactions between all these objects. And it is my opinion that we need to go deeper and formalize these operations in order to make a step forward. And when we design Supercloud, we design something that is better than the current HyperClouds. And actually, when we design something better, we make a system more efficient and it's going to be better from the cost point of view, from the performance point of view. But we need to add some math into all this customer focus centering, and I really admire AWS and their executive team focusing on the customer. But now it's time to go back and see, if we apply some computer science, if we try to formalize to build a theoretical model of cloud, can we build a system that is better than existing ones? >> So David, how do you- >> This is what I'm saying. >> That's a good question >> How do you see the operating system of a, or operating environment of a decentralized cloud? >> Well I think it's layered. I mean we have operating systems that can run systems quite efficiently. Linux has sort of won in the data center, but we're talking about a layer on top of that. And I think we're seeing the emergence of that. For example, on the job scheduling side of things, Kubernetes makes a really good example. You know, you break the workload into the most granular units of compute, the containerized microservice, and then you use a declarative model to state what is needed and give the system the degrees of freedom so that it can choose how to instantiate it. Because the thing about these distributed systems is that the complexity explodes, right? 
Running a piece of hardware, running a single server is not a problem, even with all the many cores and everything like that. It's when you start adding in the networking, and making it so that you have many of them. And then when it's going across whole different data centers, you know, so, at that level the way you solve this is not manually (group laughs) and not procedurally. You have to change the language so it's intent based, it's a declarative model, and what you're stating is what is intended, and you're leaving it to more advanced techniques, like machine learning, to decide how to instantiate that service across the cluster, which is what Kubernetes does, or how to instantiate the data across the diverse storage infrastructure. And that's what we do. >> So that's a very good point, because actually what has been neglected with HyperClouds is really optimization and automation. But in order to be able to do both of these things, you need, I'm going back and I'm stubborn, you need to have a mathematical model, a theoretical model, because what does automation mean? It means that we have to put machines to do the work instead of us, and machines work with what? Formulas, with algorithms, they don't work with services. So I think Supercloud is an opportunity to underscore the importance of optimization and automation- >> Totally agree. >> In HyperCloud, and actually by doing that, we can also have an interesting connotation. We are also contributing to saving our planet, because if you think right now, we're consuming a lot of energy on these HyperClouds and also all these AI applications, and I think we can do better and build the same kind of application using less energy. >> So yeah, great point, love that call out, the- you know, Dave and I always joke about the old, 'cause we're old, we talk about, you know, (Nelu laughs) old history, OS/2 versus DOS, okay, OS's, OS/2 is silly better, first threaded OS, DOS never went away. So how does legacy play into this conversation? Because I buy the theoretical, I love the conversation. Okay, I think it's an OS, totally see it that way myself. What's the blocker? Is there a legacy that drags it back? Is the anchor dragging from legacy? Is there a DOS OS/2 moment? Is there an opportunity to flip the script? This is- >> I think that's a perfect example of why we need to support the existing interfaces. Operating systems, real operating systems like Linux, understand how to present data, it's called a file system, block devices, things that plumb in there. And by, you know, going to a REST interface and S3 and telling people they have to rewrite their applications, you can't even consume your application binaries that way, the OS doesn't know how to pull that sort of thing. So we, to get to cloud, to get to the ability to host massive numbers of tenants within a centralized infrastructure, you know, we abandoned these lower level interfaces to the OS and we have to go back to that. It's the reason why DOS ultimately won, it's that it had the momentum of the install base. We're seeing the same thing here. Whatever it is, it has to be a real file system and not a come down file system >> Nelu, what's your reaction, 'cause you're on the theoretical bandwagon. Let's get your reaction. >> No, I think it's a good, I'll give, you made a good analogy between OS/2 and DOS, but I'll go even farther saying, if you think about it, the evolution of operating systems didn't stop the evolution of underlying microprocessors, hardware, and so on and so forth. 
On the contrary, it was a catalyst for that. So because everybody could develop their own hardware, without worrying that the applications on top of the operating system are going to have to be modified. The same thing is going to happen with Supercloud. You're going to have the AWSs, you're going to have the Azure and the GCP continue to evolve in their own way, proprietary. But if we create on top of it the right interface >> The open, this is why open is important. >> That's correct, because actually you're going to see, sometime ago, everybody was saying, remember, venture capitalists were saying, "AWS killed the world, nobody's going to come." Now you see what Oracle is doing, and then you're going to see other players. >> It's funny, Amazon's trying to be more like Microsoft. Microsoft's trying to be more like Amazon and Google- Oracle's just trying to say they have cloud. >> That's, that's correct, (group laughs) so, my point is, you're going to see a multiplication of these HyperClouds and cloud technology. So, the system has to be open in order to accommodate what it is and what is going to come. Okay, so it's open. >> So the legacy- so legacy is an opportunity, not a blocker in your mind. And you see- >> That's correct, I think we should allow them to continue to be their own actually. But maybe you're going to find a way to connect with it. >> Amazon's the processor, and they're on the 80 80 80 right? >> That's correct. >> You're saying you love people trying to get put to work. >> That's a good analogy. >> But, performance levels you say good luck, right? >> Well yeah, we have to be able to take traditional applications, high performance applications, those that consume file system and persistent data. Those things have to be able to run anywhere. You need to be able to put them onto, you know, more elastic infrastructure. So, we have to actually get cloud to where it lives up to its billing. >> And that's what you're solving for, with Hammerspace, >> That's what we're solving for, making it possible- >> Give me the bumper sticker. >> Solving for how do you have massive quantities of unstructured file data? At the end of the day, all data ultimately is unstructured data. Have that persistent data available, across any data center, within any cloud, within any region, on-prem, at the edge. And have not just the same APIs, but have the exact same data sets, and not sucked over a straw remote, but at extreme high performance, local access. So how do you have local access to globally shared distributed data? And that's what we're doing. We are orchestrating data globally across all different forms of storage infrastructure, so you have consistent access at the highest performance levels, at the lowest level, innate, built into the OS, how to consume it as (indistinct) >> So are you going into the- all the clouds and natively building in there, or are you off cloud? >> So this is software that can run on cloud instances and provide high performance file access within the cloud. It can take file data that's on-prem. Again, it's software, it can run in virtual or on physical servers. And it abstracts the data from the existing storage infrastructure, and makes the data visible and consumable and orchestratable across any of it. >> And what's the elevator pitch for Cloud of Clouds, give that too. >> Well, Cloud of Clouds creates a theoretical model of cloud, and it describes every single object in the cloud. 
Whether it's data, execution units, or connectivity, with one single class of very simple object. And I can, I can give you (indistinct) >> And the problem that solves is what? >> The problem that solves is, it creates this mathematical model that is necessary in order to do other interesting things, such as optimization, using sata engines, using automation, applying ML for instance. Or deep learning to automate all these clouds. If you think about it, in the industrial field, we know how to manage and automate huge plants. Why wouldn't we do the same thing in cloud? It's the same thing you- >> That's what you mean by theoretical model. >> Nelu: That's correct. >> Lay out the architecture, almost the bones of a skeleton or something, or, and then- >> That's correct, and then on top of it you can actually build a platform, you can create your services, >> When you say math, you mean you put numbers to it, you kind of index it. >> You quantify this thing and you apply mathematical- It's really about, I can disclose this thing. It's really about describing the cloud as a knowledge graph, where every single object in the graph, a node or an edge, is a vector. And then once you have this model, then you can apply field theory, and linear algebra, to do operations with these vectors. And it's, this creates a very interesting opportunity to let the math do this thing for us. >> Okay, so what happens with hyperscalers, like AWS, in your model? >> So in, in my model actually, >> Are they happy with this, or they >> I'm very happy with that. >> Will they be happy with you? >> We create an interface to every single HyperCloud. We actually, we don't need to interface with the thousands of APIs, but you know, if we have the 80/20 rule, and we map these APIs into this graph, and then every single operation that is done in this graph is done from the beginning, in an optimized manner and also automation ready. >> That's going to be great. David, I want us to go back to you before we close real quick. You've had a lot of experience, multiple ventures on the front end. You talked to a lot of customers who've been innovating. Where are the classic (indistinct)? 'Cause you used to sell and invent product around the old school enterprises with storage, you know that trajectory, storage is still critical to store the data. Where's the classic enterprise grade mindset right now? Those customers that were buying, that are buying storage, they're in the cloud, they're lifting and shifting. They've not yet put the throttle on DevOps. When they look at this Supercloud thing, are they like a deer in the headlights, or are they like getting it? What's the, what's the classic enterprise look like? >> You're seeing people at different stages of adoption. Some folks are trying to get to the cloud, some folks are trying to repatriate from the cloud, because they've realized it's better to own than to rent when you use a lot of it. And so people are at very different stages of the journey. But the one thing that's constant is that there's always change. And the change here has to do with being able to change the location where you're doing your computing. 
So being able to support traditional workloads in the cloud, being able to run things at the edge, and being able to rationalize where the data ought to exist, and with a declarative model, intent-based, business objective-based, be able to swipe a mouse and have the data get redistributed and positioned across different vendors, across different clouds, that, we're seeing that as really top of mind right now, because everybody's at some point on this journey, trying to go somewhere, and it involves taking their data with them. (John laughs) >> Guys, great conversation. Thanks so much for coming on, for John, Dave. Stay tuned, we got a great analyst power panel coming right up. More from Palo Alto, Supercloud 2. Be right back. (bouncy music)
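
(Editor's note: to make the declarative, intent-based model David Flynn describes above a little more concrete, here is a minimal sketch in Python of a reconciliation loop for data placement. You declare what is intended, the system compares that intent with the actual state and works out the actions. The site names, the Intent fields, and the policy are illustrative assumptions, not Hammerspace's or Kubernetes' actual APIs.)

```python
# Minimal sketch of a declarative, intent-based reconciliation loop.
# The objective ("what is intended") is declared; the system decides
# which placement actions are needed to converge toward it.
# All names and policies here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Intent:
    dataset: str
    replicas: int
    allowed_sites: list  # sites the data may live on

@dataclass
class ClusterState:
    placements: dict = field(default_factory=dict)  # dataset -> set of sites

def reconcile(intent: Intent, state: ClusterState) -> list:
    """Return the actions needed to make `state` satisfy `intent`."""
    current = state.placements.get(intent.dataset, set())
    actions = []
    # Remove copies that violate the declared placement policy.
    for site in current - set(intent.allowed_sites):
        actions.append(("evict", intent.dataset, site))
    kept = current & set(intent.allowed_sites)
    # Add copies until the declared replica count is met.
    for site in intent.allowed_sites:
        if len(kept) >= intent.replicas:
            break
        if site not in kept:
            actions.append(("replicate", intent.dataset, site))
            kept.add(site)
    return actions

if __name__ == "__main__":
    intent = Intent("credit-card-app-logs", replicas=2,
                    allowed_sites=["aws-us-east-1", "gcp-us-central1", "on-prem-dc1"])
    state = ClusterState(placements={"credit-card-app-logs": {"azure-westus", "aws-us-east-1"}})
    for action in reconcile(intent, state):
        print(action)  # e.g. ('evict', ..., 'azure-westus'), ('replicate', ..., 'gcp-us-central1')
```

The point of the sketch is the shape of the model, not the policy itself: the operator states the objective once, and the system, not a human, derives the procedural steps each time the environment changes.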

Published Date : Jan 18 2023



Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante >> Enterprise tech practitioners, like most of us they want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best of breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top line revenue and bottom line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks that's right Supercloud 2 is here. As of this recording, it's just about four days away 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto Studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests whereas Supercloud 22 in August, was all about refining the definition of Supercloud testing its technical feasibility and understanding various deployment models. Supercloud 2 features practitioners, technologists and analysts discussing what customers need with real-world examples of Supercloud and will expose thinking around a new breed of cross-cloud apps, data apps, if you will that change the way machines and humans interact with each other. Now the example we'd use if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities they're choosing products they're importing contacts, et cetera. And sure the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data storages, databases, Lakehouse, et cetera. And the machine uses AI to inspect the e-commerce system the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the founder of Data Mesh is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Sachs will be featured and talk about data sharing across clouds and you know what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union Ionis Pharmaceuticals, Warner Media. 
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn Tristan Handy of DBT Labs, Nir Zuk, the founder of Palo Alto Networks focused on security. Thomas Hazel, who's going to talk about a new type of database for Supercloud. It's several analysts including Keith Townsend Maribel Lopez, George Gilbert, Sanjeev Mohan and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail starting with the Walmart Cloud native platform, they call it WCNP. We definitely see this as a Supercloud and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack. "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do, they took a do-it-yourself approach to build a Supercloud for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure and GCP. No surprise, there's no Amazon at Walmart for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronica Durgin of SAKS thinks about data sharing across clouds. Data sharing we think is a potential killer use case for Supercloud. In fact, let's hear it in Veronica's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know this is what I do. So if you know I want to get data from a company that's using, say Google, how do we share it in a smooth way where it doesn't have to be this crazy I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? >> Now data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft Exec was kind enough to spend some time looking at the community's supercloud definition and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it. "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition he's stressed that the Supercloud is a platform versus an architecture implying that the platform provider eg Snowflake, VMware, Databricks, Cohesity, et cetera is responsible for determining the architecture. 
Now interestingly, in the shared Google doc that the working group uses to collaborate on the supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model and how users will interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? Well, history suggests that de facto standards will emerge more quickly to resolve real world practitioner problems and catch on more quickly than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail in Supercloud 2, and of course we'd love to hear what you think: platform, architecture, both? Now one of the real technical gurus that we'll have in studio at Supercloud two is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io and he is now working on a system to enable read-write data access to any user in any application in any data center or on any cloud anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Flore and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that was, we thought, relevant. "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler and, two, the file system. The strongest argument for supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture, and there are obviously trade-offs there and benefits, but we'll have to clarify that with him. And second, he's basically saying, you kill the concept the further you move up the stack. So the further you move up the stack, the weaker the supercloud argument becomes, because it's just becoming SaaS. Now this is something we're going to explore to better understand his thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud two. Tristan Handy, he's the founder and CEO of DBT Labs and he has a highly opinionated and technical mind. Here's what he said, "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case because it involves a lot of data engineering pipeline and other work to make these available. 
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." A lot of implications to this statement that we'll explore at Supercloud two. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon. Veronica Durgin of SAKS and her ideal state for data sharing, along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, and data privacy issues, data sovereignty, how do you share data safely? Same with Nick Taylor of Ionis Pharmaceutical. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute force manner. Rather you have to get down to really detailed levels, even things like how data is laid out on disk, i.e. flash, and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud two, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Darren Bramberm, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud two. Now, many of you are familiar with this graphic. Here we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum and on the horizontal axis is market presence or pervasiveness in the data. So Net Score versus what they call overlap or n in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, Hashi, Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well because backup is a likely use case across clouds and on-premises. And now one other call out that we drill down on at Supercloud two is CloudFlare, which actually uses the term supercloud maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud two along with many others. Okay, so why should you attend Supercloud two? What's in it for me kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real world input from practitioners. 
If you're a technologist, you're trying to figure out various ways to solve problems around data, data sharing, cross-cloud service deployment there's going to be a number of deep technology experts that are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, are kind of thrown up their hands and saying, Hey, we're going mono cloud. And we'll talk about the potential implications and dangers and risks of doing that. And also some of the benefits. You know, there's a question, right? Is Supercloud the same wine new bottle or is it truly something different that can drive substantive business value? So look, go to Supercloud.world it's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor Chaos Search Proximo, and Alura as well. For contributing to the effort I want to thank Alex Myerson who's on production and manages the podcast. Ken Schiffman is his supporting cast as well. Kristen Martin and Cheryl Knight to help get the word out on social media and at our newsletters. And Rob Ho is our editor-in-chief over at Silicon Angle. Thank you all. Remember, these episodes are all available as podcast. Wherever you listen we really appreciate the support that you've given. We just saw some stats from from Buzz Sprout, we hit the top 25% we're almost at 400,000 downloads last year. So really appreciate your participation. All you got to do is search Breaking Analysis podcast and you'll find those I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me you can email me directly at David.Vellante@siliconangle.com or dm me DVellante or comment on our LinkedIn post. I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud two or next time on breaking analysis. (light music)
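
(Editor's note: to ground Tristan Handy's point above about describing a metric once and turning it into an API that application developers can call without knowing the calculation, here is a small self-contained sketch. The schema, the metric definition, and the compile step are illustrative assumptions using SQLite, not dbt's actual semantic layer.)

```python
# A minimal sketch of a "metrics layer": the metric is described once,
# declaratively, and an application developer calls it by name without
# knowing how it is calculated. All names and the schema are hypothetical.

import sqlite3

METRICS = {
    "revenue_by_region": {
        "table": "orders",
        "measure": "SUM(amount)",
        "dimension": "region",
        "filter": "status = 'complete'",
    }
}

def compile_metric(name: str) -> str:
    """Turn a declarative metric definition into SQL."""
    m = METRICS[name]
    return (f"SELECT {m['dimension']}, {m['measure']} AS {name} "
            f"FROM {m['table']} WHERE {m['filter']} GROUP BY {m['dimension']}")

def query_metric(conn, name: str):
    """What an application developer would call -- no SQL knowledge needed."""
    return conn.execute(compile_metric(name)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (region TEXT, amount REAL, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
        ("east", 120.0, "complete"),
        ("east", 80.0, "returned"),
        ("west", 200.0, "complete"),
    ])
    print(query_metric(conn, "revenue_by_region"))
    # e.g. [('east', 120.0), ('west', 200.0)]
```

The same definition could just as easily be served over HTTP or gRPC; the design point is that the calculation lives in one governed place rather than being re-derived inside every application.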

Published Date : Jan 14 2023



Anurag Gupta, Shoreline io | AWS re:Invent 2022 - Global Startup Program


 

(gentle music) >> Now welcome back to theCUBE, everyone. I'm John Walls, and once again, we're glad to have you here for AWS re:Invent 22. Our coverage continues here on Thursday, day three, of what has been a jam-packed week of tech and AWS, of course, has been the great host for this. It's now a pleasure to welcome in Anurag Gupta, who is the founder and CEO of Shoreline, joining us here as part of the AWS Global Showcase Startup Program, and Anurag, good to see you, sir. Thanks for joining us. >> Thank you so much. >> Tell us about Shoreline, about what you're up to. >> So we're a DevOps company. We're really focused on repairing issues. If you think about it, there are a ton DevOps companies and we all went to the cloud in order to gain faster innovation and by and large check. Then all of the things involved in getting things into production, artifact generation, testing, configuration management, deployment, also by and large, automated. Now pity the poor SRE who's getting the deluge of stuff on them, every week, every two days, sometimes multiple times a day, and it's complicated, right? Kubernetes, VMs, lots of services, multiple clouds, sometimes, and you know, they need to know a little bit about everything. And you know what, there are a ton of companies that actually help you with what we call Day-2 Ops. It's just that most of them help you with observability, telling you what's gone wrong, or incident management, routing something to someone. But you know, back when I was at AWS, I never got really that excited about one more dashboard to look at or one more like better ticket routing. What used to really excite me was having some issue extinguished forever. And if you think about it, like the first five minutes of an incident are detecting and routing. The next hour, two hours, is some human being going in and fixing it, so that feels like the big opportunity to reduce, so hopefully we can talk a little bit about different ways that one can do that. >> What about Day-2 Ops? Just tell me about how you define that. >> So I basically define it as once the software goes into a production, just making sure things stay up and are healthy and you're resilient and you don't get errors and all of those sorts of things because everything breaks sooner or later, you know, to a greater or lesser degree. >> Especially that SRE you're talking about, right? >> Yeah. >> So let's go back to that scenario. Yeah, you pity the poor soul because they do have to be a little expert in everything. >> Exactly. >> And that's really challenging and we all know that, that's really hard. So how do you go about trying to lighten that burden, then? >> So when you look at the numbers, about somewhere between 40% to even 95% of the alarms that fire, the alerts that fire, are false positives and that's crazy. Why is someone waking up just to deal with? >> It's a lot of wasted time, isn't it? >> A lot of wasted time. And you know, you're also training someone into what I call ClickOps, just to go in and click the button and resolve it and you don't actually know if it was the false positive or it's the rare real positive, and so that's a challenge, right? And so the first thing to do is to figure out where the false positives are. Like, let's say Datadog tells you that CPU is high and alarms. Is that a good thing or a bad thing? It's hard for them to tell, right? But you have to then introspect it into something precise like, oh, CPU is high, but response times are standard and the request rate is high. 
Okay, that's a good thing. I'm going to ignore this. Or CPU is high, but it kind of resolves itself, so I'm going to not wake anybody up. Or CPU is high and oh, it's the darn JVM starting to garbage collect again, so let me go and take a heap dump and give that to my dev team and then bounce the JVM and you know, without waking anybody up, or CPU is high, I have no idea what's going on. Now it's time to wake somebody up. You know, what you want to use humans for is the ability to think about novel stuff, not to do repetitive stuff, so that's the first step. The second step is, about 40% of what remains is repetitive and straightforward. So like a disk is full, I'd better clean up the garbage on the disk or maybe grow the disk. People shouldn't wake up to deal to grow a disk. And so for that, what you want to do is just have those sorts of things get automated away. One of the nice things about Shoreline is, is that we take the experience in what we build for one company, and if they're willing, provide it to everybody else. Our belief is, a central tenant is, if someone somewhere fixes something, everyone everywhere should gain the benefit because we all sit on the same three clouds, we all sit on the same set of database infrastructure, et cetera. We should all get the same benefits. Why do we have to scar our own backs rather than benefiting from somebody else's scar tissue, so that's the second thing. The third thing is, okay, let's say it's not straightforward, not something I've seen before, then in that case, what often happens is on average like eight people get involved. You know, it initially goes to L1 support or L1 ops and, but they don't necessarily know because, as you say, the environment's complex. And so, you know, they go into Slack and they say, "At here, can somebody help me with this?" And those things take a much longer time, so wouldn't it be better that if your best SRE is able to say, "Hey, check these 20 things and then run these actions." We could convert that into like a Jupyter Notebook where you could say the incident got fired I pre-populated all the diagnostics, and then I tell people very precisely, "If you see this, run this, et cetera." Like a wiki, but actually something you could run right in this product. And then, you know, last piece of the puzzle, the smaller piece, is sometimes new things happen and when something new happens, what you want is sort of the central tech of Shoreline, which is parallel distributed, real-time debugging. And so the ability to do, you know, execute a command across your fleet rather than individual boxes so that you can say something like, "I'm hearing that my credit card app is slow. For everything tagged as being part of my credit card app, please run for everything that's running over 90% CPU, please run a top command." And so, you know, then you can run in the same time on one host as you can on 30,000 and that helps a lot. So that's the core of what we do. People use us for all sorts of things, also preventative maintenance, you know, just the proactive regular things. You know, like your car, you do an oil change, well, you know, you need to rotate your certs, certificates. You need to make sure that, you know, there isn't drift in your configurations, there isn't drift in your software. There's also security elements to it, right? You want to make sure that you aren't getting weird inbound/outbound traffic across to ports you don't expect to be open. 
You don't want to have these processes running, you know, maybe something's bad. And so that's all the kind of weird anomaly detection that's easy to do if you run things in a distributed parallel way across everything. That's super hard to do if you have to go and Whac-A-Mole across one box after the next. >> Well, which leads to a question just in terms of setting priorities then, which is what you're talking about, helping companies establish priorities, this hierarchy of level one warning, level two, level three, level four. Sounds like that should be a basic, right? But you're saying that's not really happening in the enterprise. >> Well, you know, I would say that if you hadn't automated deployments, you should do that first. If you haven't automated your testing pipeline, shame on you, you should have done that like a year ago. But now it's time to help people in production because you've done that other work and people are suffering. You know, the crazy thing about the cloud is that companies spend about three times more on the human beings who operate their cloud infrastructure as on the cloud infrastructure itself. I've yet to hear anybody say that their cloud bill is too low, you know, so there's a clear savings also available. And you know, back when I was at AWS, obviously I had to keep the lights on too, but it's kind of a tax on my engineers, and I'd really prefer to spend the head count on innovation, on doing things that delight my customers. You never delight your customers by keeping the lights on, you just avoid irritating them by turning 'em off, right? >> So why are companies so fixated on spending so much time manually repairing things and not looking for these kinds of much more elegant solutions that are cost-efficient, time-saving, and so on? >> Yeah, I think there just hasn't been very much in this space as yet because it's a hard, hard problem to solve. You know, automation's a little bit scary, and that's the reality of it, and the way you make it less scary is by proving it out, by doing the simple things first, like reducing the alert fatigue, you know, that's easy. You know, providing notebooks to people so that they can click things and do things in a straightforward way. That's pretty easy. The full automation, that's kind of the North Star, that's what we aspire to do. But you know, people get there over time, and one of our customers had 700 instances of this particular incident solved for them last week. You imagine how many human beings would've been doing it otherwise, you know? >> Right. >> That's just one thing, you know? >> How many did it take to build a pyramid? How many decades did that take, right? You had an announcement this week. I don't think we've talked about that. >> No, yeah, so we just announced Incident Insights, which is a free product that lets people plug into initially PagerDuty and pretty soon Opsgenie, ServiceNow, et cetera. And what you can do is give us a read-only API key and we will suck your PagerDuty data out. We apply some lightweight ML, unsupervised learning, and in a couple of minutes we categorize all of your incidents so that you can understand which are the ones that happen most often and are getting resolved really quickly. That's ClickOps, right? Those alarms shouldn't fire. Which are the ones that involve a lot of people? Those are good candidates to build a notebook. Which are the ones that happen again and again and again?
Those are good candidates for automation. And so I think one of the challenges people have is that they don't actually know what their teams are doing, and so this is intended to provide them that visibility. One of our very first customers was doing the beta test for us on it. He used to tell us he had about 100 tickets, incidents a week. You know, he brought this tool in and he had 2,100 last week, and it was all, you know, like these false alarms, so while he's giving us- >> That was eye opening for him to see that, sure. >> And while he's, you know, looking at it, he's just filing Jiras to say, "Oh, change this threshold, cancel this alarm forever." You know, all of that kind of stuff. Before you get to do the fancy work, you got to clean your room before you get to do anything else, right? >> Right, right, dinner before dessert, basically. >> There you go. >> Hey, thanks for the insights on this, and again the name of the new product, by the way, is... >> Incident Insights. >> Incident Insights. >> Totally free. >> Free. >> Yeah, it takes a couple of minutes to set up. Go to the website, Shoreline.io/insight, and you can be up and running in a couple of minutes. >> Outstanding, again, the company is Shoreline. This is Anurag Gupta, and thank you for being with us. We appreciate it. >> Appreciate it, thank you. >> Glad to have you here on theCUBE. Back with more from AWS re:Invent 22. You're watching theCUBE, the leader in high-tech coverage. (gentle music)
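The categorization step described above — hand over a read-only PagerDuty key, pull the incidents out, and let lightweight unsupervised learning group the repeat offenders — can be sketched roughly like this. This is not Shoreline's actual pipeline; the PagerDuty REST endpoint is real, but the TF-IDF plus k-means choice, the cluster count, and the PAGERDUTY_TOKEN variable are illustrative assumptions.

```python
# Rough sketch: pull resolved PagerDuty incidents, cluster the repeat offenders.
# Assumptions: PAGERDUTY_TOKEN is a read-only REST API key; k=20 clusters is arbitrary.
import os
from collections import Counter

import requests
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def fetch_incident_titles(token, page_size=100):
    """Page through the PagerDuty /incidents endpoint and collect titles."""
    titles, offset = [], 0
    while True:
        resp = requests.get(
            "https://api.pagerduty.com/incidents",
            headers={"Authorization": f"Token token={token}",
                     "Content-Type": "application/json"},
            params={"limit": page_size, "offset": offset, "statuses[]": "resolved"},
        )
        resp.raise_for_status()
        data = resp.json()
        titles += [inc["title"] for inc in data["incidents"]]
        if not data.get("more"):
            return titles
        offset += page_size

titles = fetch_incident_titles(os.environ["PAGERDUTY_TOKEN"])
vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=20, n_init=10).fit_predict(vectors)

# The biggest clusters are the alarms firing again and again: candidates for
# threshold tuning, a runbook notebook, or full automation.
for cluster, count in Counter(labels).most_common(5):
    sample = next(t for t, l in zip(titles, labels) if l == cluster)
    print(f"cluster {cluster}: {count} incidents, e.g. {sample!r}")
```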

Published Date : Dec 1 2022


Sean Knapp, Ascend.io | AWS re:Invent 2022 - Global Startup Program


 

>>And welcome back to the Cube everyone. I'm John Walls to continue our coverage here of AWS re:Invent 22. We're part of the AWS Startup Showcase, the global startup program that AWS so proudly sponsors, and with us to talk about what they're doing now in the AWS space is Sean Knapp, the CEO of Ascend.io. Sean, good to have you here with us. We appreciate >>It. Thanks for having me, >>John. Yeah, thanks for the time. First off, gotta show the t-shirt. You caught my attention. Big data is a cluster. I don't think you get a lot of argument from some folks, right? But it's your job to make some sense of it, is it not? Yeah. Tell us about Ascend.io. >>Sure. Ascend.io is a data automation platform. What we do is connect a lot of the disparate parts of what data teams do when they create ETL and ELT data pipelines. And we use advanced levels of automation to make it easier and faster for them to build these complex systems and have their world be a little bit less of a cluster. >>All right. So let's get into automation a little bit then, again, your definition of automation and how you're applying it to your business case. >>Absolutely. You know, what we see oftentimes is as spaces mature and evolve, the number of repetitive and repeatable tasks that actually become far less differentiating, but far more taxing, if you will, to the business, start to accumulate as those common patterns emerge. And, you know, as we see standardization around tech stacks, like on Amazon and on Snowflake and on Databricks, and as you see those patterns really start to formalize and standardize, it opens up the door to basically not have your team have to do all those things anymore and write code or perform the same actions that they used to always have to, and you can lean more on technology to properly automate and remove the monotony of those tasks and give your teams greater leverage. >>All right. So let's talk about the journey, say, in the past 18 months in terms of automation. What have you seen from a trend perspective and how are you trying to address that in order to meet that need? >>Yeah, I think the last 18 months have become, you know, really exciting as we've seen both, you know, a very exciting boom and bust cycle that are driving a lot of other macro behaviors. You know, what we've seen over the last 18 months is far greater adoption of the standard, what we call the data planes, the architectures around Snowflake and Databricks and Amazon. And what that's created as a result is the emergence of what I would call the next problem. You know, as you start to solve that category of how- >>That's how it always works too, isn't it? >>Yeah, exactly. Always works that way. >>This is the wonderful thing about technology is the job security. There's always the next problem to go solve. And that's what we see is, you know, as we go into cloud, we get that infinite scale, infinite capacity, infinite flexibility. And you know, with these modern data platforms, we get that infinite ability to store and process data incredibly quickly with incredible ease. And so what do most organizations do? You take a ton of new bodies, like all the people who wanted to do those really cool things with data, you're like, okay, now you can. And so you start throwing a lot more use cases, you start creating a lot more data products, you start doing a lot more things with data. And this is really where that third category starts to emerge, which is you get this data mess, not mesh, but the data mess.
And this is really where that third category starts to emerge, which is you get this data mess, not mesh, but the data mess. >>You get a cluster cluster, you get a cluster exactly where the complexity skyrockets. And as a result that that rapid innovation that, that you are all looking for and, and promised just comes to a screeching halt as you're just, just like trying to swim through molasses. And as a result, this is where that, that new awareness around automation starts really heightened. You know, we, we did a really interesting survey at the start of this year, did it as a blind survey, independent third party surveyed, 500 chief data officers, data scientists, data architects, and asked them a plethora of questions. But one of the questions we asked them was, do you currently or do you intend on investing in data automation to increase your team's productivity? And what was shocking, and I was very surprised by this, okay, what was shocking was only three and a half percent said they do today. Which is really interesting because it really hones in on this notion of automation is beyond what a lot of a think of, you know, tooling and enhancements today, only three and a half percent today had it, but 88.5% said they intend on making data automation investments in the next 12 months. And that stark contrast of how many people have a thing and how many people want that benefit of automation, right? I think it is incredibly critical as we look to 2023 and beyond. >>I mean, this seems like a no-brainer, does it not? I mean, know it is your business, of course you agree with me, but, but of course, of course what brilliant statement. But it is, it seems like, you know, the more you're, you're able to automate certain processes and then free up your resources and your dollars to be spent elsewhere and your, and your human capital, you know, to be invested elsewhere. That just seems to be a layup. I'm really, I'm very surprised by that three and a half percent figure >>I was too. I actually was expecting it to be higher. I was expecting five to 10%. Yeah. As there's other tools in the, the marketplace around ETL tools or orchestration tools that, that some would argue fit in the automation category. And I think the, what, what the market is telling us based on, on that research is that those themselves are, don't qualify as automation. That, that the market has a, a larger vision for automation. Something that is more metadata driven, more AI back, that takes us a greater leap and of leverage for the teams than than what the, the existing capabilities in the industry today can >>Afford. Okay. So if you got this big leap that you can make, but, but, but maybe, you know, should sites be set a little lower, are you, are you in danger of creating too much of an expectation or too much of a false hope? Because you know, I mean sometimes incremental increases are okay. I >>Agree. I I I think the, you know, I think you wanna do a little bit of both. I think you, you want to have a plan for, for reaching for the stars and you gotta be really pragmatic as well. Even inside of a a suni, we actually have a core value, which is build for 10 x plan for a hundred x and so know where you're going, right? But, but solve the problems that are right in front of you today as, as you get to that next scale. 
And I think the, the really important part for a lot of companies is how do you think about what that trajectory is and be really smart around where you choose to invest as you, one of the, the scenes that we have is last year's innovation is next year's anchor around your neck. And that's because we, we were in this very fortunately, so this really exciting, rapidly moving innovative space, but the thing that was your advantage not too long ago is everybody can move so quickly now becomes commonplace and a year or two later, if you don't jump on whatever that next innovation is that the industry start to standardize on, you're now on hook paying massive debt and, and paying, you know, you thought you had, you know, home mortgage debt and now you're paying the worst of credit card debt trying to pay that down and maintain your velocity. >>It's >>A whole different kind of fomo, right? I'm fair, miss, I'm gonna miss out. What am I missing out on? What the next big thing exactly been missing out >>On that? And so we encourage a lot of folks, you know, as you think about this as it pertains to automation too, is you solve for some of the problems right in front of you, but really make sure that you're, you're designing the right approach that as you stack on, you know, five times, 10 times as many people building data products and, and you, you're, you're your volume and library of, of data weaving throughout your, your business, make sure you're making those right investments. And that's one of the reasons why we do think automation is so important and, and really this, this next generation of automation, which is a, a metadata and AI back to level of automation that can just achieve and accomplish so much more than, than sort of traditional norms. >>Yeah. On that, like, as far as Dex Gen goes, what do you think is gonna be possible that cloud sets the stage for that maybe, you know, not too long ago seem really outta reach, like, like what's gonna give somebody to work on that 88% in there that's gonna make their spin come your way? >>Ah, good question. So I, I think there's a couple fold. I, you know, I think the, right now we see two things happening. You know, we see large movements going to the, the, the dominant data platforms today. And, and you know, frankly, one of the, the biggest challenges we see people having today is just how do you get data in which is insanity to me because that's not even the value extraction, that is the cost center piece of it. Just get data in so you can start to do something with it. And so I think that becomes a, a huge hurdle, but the access to new technologies, the ability to start to unify more of your data and, and in rapid fashion, I think is, is really important. I think as we start to, to invest more in this metadata backed layer that can connect that those notions of how do you ingest your data, how do you transform it, how do you orchestrate it, how do you observe it? One of the really compelling parts of this is metadata does become the new big data itself. And so to do these really advanced things to give these data teams greater levels of automation and leverage, we actually need cloud capabilities to process large volumes of not the data, but the metadata around the data itself to deliver on these really powerful capabilities. And so I think that's why the, this new world that we see of the, the developer platforms for modern data cloud applications actually benefit from being a cloud native application themselves. 
>>So before you take off, talk about the AWS relationship, part of the Startup Showcase, part of the growth program. And we've talked a lot about the cloud, what it's doing for your business, but let's just talk about, again, how integral they have been to your success and, likewise, what you think maybe you bring to their table too. Yeah, >>Well, we bring a lot to the table. >>Absolutely. I had no doubt about that. >>I mean, honestly, working with AWS has been truly fantastic. Yep. You know, I think, you know, as a startup that's really growing and expanding your footprint, having access to the resources in AWS to drive adoption, drive best practices, drive awareness is incredibly impactful. I think, you know, conversely too, the value that Ascend provides to the AWS ecosystem is tremendous leverage on onboarding and driving faster use cases, faster adoption of all the really great, cool, exciting technologies that we get to hear about. By bringing more advanced layers of automation to the existing product stack, we can make it easier for more people to build more powerful things faster and safely. Which I think is what most businesses at re:Invent really are looking for. >>It's win-win, win-win. Yeah. That's for sure. Sean, thanks for the time. Thank you John. Good job on the t-shirt and keep up the good work. Thank you very much. I appreciate that. Sean Knapp, joining us here on the AWS startup program, part of the Startup Showcase. We are of course on the Cube, I'm John Walls. We're at the Venetian in Las Vegas, and the cube, as you well know, is the leader in high tech coverage.

Published Date : Nov 30 2022


Anais Dotis Georgiou, InfluxData | Evolving InfluxDB into the Smart Data Platform


 

>>Okay, we're back. I'm Dave Vellante with The Cube and you're watching Evolving InfluxDB into the smart data platform, made possible by InfluxData. Anais Dotis Georgiou is here. She's a developer advocate for InfluxData and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into realtime analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in memory, of course for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, it's gonna store files in object storage. So you've got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best in class performance on analytics queries, in addition to our already well served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, super useful. Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. Adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow, and this control over memory. And also Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, to protect against buffer overflows and to ensure thread-safe async caching structures as well. So essentially it just has all the fine grain control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learned about the new engine and the platform, IOx et cetera, you know, you see things like, you know, in the old days, and even today, you do a lot of garbage collection in these systems, and there's an inverse, you know, impact relative to performance. So it looks like the community is really modernizing the platform, but I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why, what is Arrow and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, like, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand different points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp and do that for every single row.
So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar, and Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come >>From. Okay. So you've basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about is really, you know, kind of native. Is the former not as effective because it's largely a bolt on? Can you elucidate on that front? >>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Yeah. Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as its in-memory format. So the way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a pandas API so that you could take advantage of pandas data frames as well and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format. So it's important because it'll enable bulk import and bulk export. It has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxDB first has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic and support for EXISTS clauses and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy here is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it.
You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard work, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours, and they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel. Look for the InfluxDB IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect and what you'd like to learn more about. I, as a developer advocate, wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there and in a moment I'll be back with Tim Yokum. He's the director of engineering for InfluxData and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
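The room/stove temperature walkthrough above is easy to reproduce with pyarrow, which exposes the same Arrow in-memory columnar format and Parquet file format discussed in this segment. The schema, row count, and repeated-value pattern below are made up for illustration; only the library calls are real, and the CSV/Parquet size ratio will vary by data set.

```python
# Minimal sketch: the Arrow columnar layout in memory, Parquet on disk.
# Schema and values are illustrative; a regulated room temperature repeats
# a lot, which is exactly what makes that column cheap to compress.
import os
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

n = 100_000
table = pa.table({
    "time": pa.array(range(n), type=pa.int64()),
    "room": ["living_room"] * n,                   # tag column: constant, dictionary-encodes well
    "room_temp": [21.0] * (n - 5) + [21.5] * 5,    # field column: long runs of equal values
    "stove_temp": [180.0 + (i % 7) for i in range(n)],
})

# A scan like "min/max room temperature" only touches one column.
print(pc.min_max(table["room_temp"]))

# Parquet keeps the column-oriented layout on disk, so a single-column read
# doesn't have to walk whole rows the way a row-oriented CSV file would.
pq.write_table(table, "temps.parquet", compression="zstd")
print(pq.read_table("temps.parquet", columns=["room_temp"]).num_rows)

pacsv.write_csv(table, "temps.csv")
ratio = os.path.getsize("temps.csv") / os.path.getsize("temps.parquet")
print(f"CSV is ~{ratio:.0f}x larger than Parquet for this table")
```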

Published Date : Nov 8 2022


Garima Kapoor, MinIO | KubeCon + CloudNativeCon NA 2022


 

>>How y'all doing? My name's Savannah Peterson, coming to you from Detroit, Michigan, where theCUBE is excited to be at KubeCon. Our guest this afternoon is a wonderfully brilliant woman who's been leading in the space for over eight years. Please welcome Garima Kapoor. Garima, thanks for being with us. >>Well, thank you for having me too. It's a pleasure. Good >>To see you. So, update, what's going on here? Saw you at VMware Explore. Yes. Welcome back to the Cube. Yes. What's, what's going on for you guys here? What's the message? What's the story? >>KubeCon, like I always say, it's our event, it's our audience. So, you know, MinIO, I dunno if you've been keeping track, MinIO did reach like a billion Docker downloads recently. So >>Congratulations. >>This is your tribe right here. Yes, >>It is. It is. >>Our tribe's native infrastructure. Come on. Yes. >>You know, this audience understands us. We understand them. You know, you were asking when did we start the company? So we started in 2014, and if you see, Kubernetes was born in 2015, in all sorts of ways. So we kind of literally grew up together along with the Kubernetes journey. So all the decisions that we took were just, you know, making sure that we addressed the Kubernetes and the cloud native audiences as the first class citizens when it comes to storage. So I think that has been very instrumental in leading us up to the point where we have reached a billion Docker downloads and we are the most loved object storage out >>There. So, do you like your younger brother Kubernetes, or not? Is it a family that gets along? >>It does get along. I think in the Kubernetes space, what we are seeing from a customer standpoint as well, right, they're warming up to Kubernetes and, you know, they are using Kubernetes as a framework to deploy anything at scale. And especially when you're, you know, offering storage as a service, whether it is for your internal audience or to the external audience, Kubernetes becomes extremely instrumental because it makes multitenancy extremely easy. It makes, you know, access control points extremely easy for different user sets and so on. Yeah. So Kubernetes is definitely the way to go. I think enterprises just need to have a little bit more skill set when it comes to Kubernetes overall, because I think there are still a few areas in which they need to invest, but I think this is the right direction, this is the right way. If you want multi-tenant, you need Kubernetes for compute, you need Kubernetes for storage. So >>You guys hit an interesting spot here with Kubernetes. You have a product that targets builders. Yes. But also it's a service that's consumed. >>Yes. Yes. >>How do you see those two lanes shaping out as the world starts to grow, the ecosystem's growing. You've got products for builders and products for people who are developers consuming services. How do you see that shaking out? Is there intersection there? There is. You seem to be hitting that. >>There is. There is definitely an intersection. And I think it's getting merged because a lot of these users are the ones who dictate what kind of stack they want as part of their application ecosystem overall, right? So that is where, when an application, for example, in the big data workloads, right? They tell their IT or their storage department, this is the S3 compatible storage that they want their applications to run on or sit on.
So the bridges definitely like becoming very narrow in that way from builders versus the service consumers overall. And I think, you know, at the end of the day, people need to get their job done from application users perspective. They want to just get in and get out. They don't want to deal with the underlying complexity when it comes to storage or any of the framework, right? So I think what we enable is for the builders to make sure they have extremely easy, simple, high performance software service that they can offer it to their customers, which is as three compatible. So now they can take their applications wherever they need to go, whether it is edge, whether it is on-prem, whether it is any of the public cloud, wherever you need to be, go be with it. With >>Mei, I mean, I wanna get your thoughts on a really big trend that's happening now. That's right. In your area of expertise. That is people are realizing that, hey, I don't necessarily need AWS S3 for storage. I gotta do my own storage or build my own. So there's a cost slash value for commodity storage. Yes. When does a company just dive to what to do there? Do they do their own? You see, CloudFlare, you seeing Wasabi, other companies? Yes. Merging. You guys are here. Yeah, yeah. Common services then there's a differentiator in the cloud. What's the, what's this all about? >>Yeah, so there are a couple of things going on in this space, right? So firstly, I think cloud model is the way to go. And what, what we mean by cloud is not public cloud, it's the cloud operating model overall, right? You need to build the applications the correct way so that they can consume cloud native infrastructure correctly. So I think that is what is going on. And secondly, I think cloud is great for your burst workloads. It's all about productivity. It's all about getting your applications to the market as fast as you can. And that is where of course, MIN IO comes into play when you know you can develop your applications natively on something like mania. And when, when you take it to production, it's very easy no matter where you go. And thirdly, I think when it comes to the cost perspective, you know, what we offer to the customers is predictability of the cost and no surprise in the builds when it comes, which is extremely important to like a CFO of a company because everyone knows that cloud is not the cheapest place to run your sustainable workloads. And there is unpredictability element involved because, you know, people leave their buckets on, people leave their compute nodes on it, it happens all the time. So I think if you take that uncertainty out of it and have more predictability around it, I think that is, that is where the true value lies. >>You're really hitting on a theme that we've been hearing a lot on the cube today, which is standardization, predictability. Yes. We, everyone always wants to move fast, but I think we're actually stepping away from that Mark Zuckerberg parity, move fast and break things and let's move fast, but know how much it's gonna cost and also decrease the complexity. Drugs >>Don't things. >>Yeah, yeah, yeah, exactly. And try, you know, minimize the collateral damage when Yeah. I, I love that you're enabling folks like that. How is, I'm curious because I see that your background, you have a PhD in philosophy, so we don't always see philosophy and DevOps and Kubernetes in the same conversation. Yeah. 
So how does this translate into your leadership within your team and the, And Min i's culture, >>So it's PhD in financial management and financial economics. So that is where my specialization lies. And I think after that I came to Bay Area. So once you're in Bay Area, you cannot escape technology. It is >>To you, >>It is just the way things are. You cannot escape startups, you cannot escape technology overall. So that's how I got introduced to it. And yeah, that it has been a great journey so far. And from the culture standpoint of view, you know, I always tell like if I can learn technology, anyone can learn technology. So what we look for is the right attitude, the right kind of, you know, passion to learn is what is most important in this world if you want to succeed. And that's what I tell everyone who joins the, who joins win I, two months, three months, you'll be up and going. I, I'm not too worried about it. >>But pet pedigree doesn't always play into it because no, the changing technology you could level up. So for sure you get into those and be contributing. >>I think one of the reasons why we have been successful the way we have been successful with storage is because we've not hired storage experts. Because they come with their own legacy and mindset of how to build things. And we are like, and we always came from a point of view, we are not a storage company. We are a data company and we want to be close to the data. So when you come to that mindset, you build a product directly attacking data, not just like, you know, in traditional appliance world and so on, so forth. So I think those things have been very instrumental in terms of getting the right people on board, making sure that they're very aligned with how we do things and you know, the dnf, the company's, >>That's for passion and that's actually counterintuitive, but it's makes sense. Yes. In new markets it doesn't always seem to take the boiler plate. Yes. Skill set or person. No, we're doing journalism, but we don't hire journalists. No, >>I mean you gotta be, It's adventurers. It is. It's curious. >>Exactly. Exactly. Yeah, I, yeah, I think also, you know, for you to disrupt any space, you cannot approach it from how they approach the problem. You need to completely turn the tables upside down as they say, right? You need to disrupt it and have the surprise element. And I think that is what always makes a technology very special. You cannot follow the path that others have followed. You need to come from a different space, different mindset altogether. So that is where it's important that you, like you said, adventurous are the people >>That that is for sure. Talk to us about the company. Are you growing scaling? How do people find out more? >>Oh yeah, for sure. So people can find out more by visiting our website. Min dot i, we are growing. We just closed last year, end of last year we closed our CDC round unicorn valuation and so on, so forth. So >>She says unicorn valuation, so casually, I just wanna point that out, that, that, that, that's funny. Like a true strong female leader. I love that. I >>Love that. Thank you. Yes. So in terms of, you know, in terms of growth and scalability, we are growing the team. We are, you know, onboarding more commercial customers to the platform. So yeah, it's growth all across growth from the community standpoint, growth from commercial number standpoint. So congratulations. Yeah, thank you. >>Yeah, that's very exciting. Grma, thank you so much for being, >>Being with us. 
Thank you for >>Having me. Always. Thanks for hanging out, and to all of you, thank you so much for tuning into theCUBE, especially for this exciting edition for all of us here in Detroit, Michigan, where we're coming to you from KubeCon. See you back here in a little bit.
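The "S3 compatible" point Kapoor keeps returning to is concrete: code written against the S3 API only needs a different endpoint to run against MinIO on a laptop, on-prem, or at the edge. Here is a minimal sketch with boto3; the endpoint, credentials, and bucket name assume a throwaway local MinIO container started with its defaults, and are placeholders rather than production values.

```python
# Sketch: the same S3 client code, pointed at a local MinIO endpoint.
# Assumes a throwaway local container, e.g.:
#   docker run -p 9000:9000 minio/minio server /data
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # drop this line to target AWS S3 instead
    aws_access_key_id="minioadmin",          # default root credentials of a local test container
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"hello from an S3-compatible store")

for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```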

Published Date : Oct 26 2022


Bassam Tabbara, Upbound | KubeCon + CloudNativeCon NA 2022


 

>>Hello everyone. My name is Savannah Peterson, coming to you live from the KubeCon show floor on theCUBE here in Detroit, Michigan. The energy is pulsing, big event for the Cloud Native Computing Foundation, and I'm joined by John Furrier on my left. John. Hello. >>Great, great, great to have you on the cube. Thanks for being our new host. You look great. Great segment coming up. I'm looking forward to this. Savannah, this is a great segment. A cube alumni, an OG in the cloud native world, or cloudarati, as I call it. Been there, done that. A lot of respect, a lot of doing some really amazing, I call it the super cloud holy grail. But we'll see >>Your favorite word, >>This favorite word. It's a really strong segment. Looking forward to hearing from this guest. >>Yes, I am very excited and I'm gonna let him tee it up a little bit. But our guest and his project were actually mentioned in the opening keynote this morning, which is very, very exciting. Ladies and gentlemen, please welcome Bassam Tabbara. Bassam, thanks for being here with >>Us. Thank you guys. So good to be back here on the show and, and this exciting energy around us. So it was super, super awesome to be here. >>Yeah, it feels great. So let's start with the opening keynote. Did you know you were gonna get that shout out? >>No, not at all. It was really cool to see, you know, I think Cruise was up there talking about how they were building their own platform for autonomous cars and what's running behind it. And they mentioned all these projects and, you know, we were like, wow, that sounds super familiar. And then, and then they said, okay, yeah, you know, Crossplane. They mentioned Crossplane, they mentioned Upbound, mentioned the work that we're doing in this space to help folks effectively run, you know, their own layer on top of cloud computing. >>And then, we've known each other, >>We're gonna do a bingo, super cloud. So how many times is it super cloud? So >>Super cloud is super services, super apps around us. It enables a lot of great things. Brian Gracely had a great podcast this week on super services. So it's super, super exciting, >>Super great time on theCUBE. Super, >>Super >>Cloud conversation. All seriously. Now we've known each other for a long time. You've been to every KubeCon, you've been in open source, you've seen where it's been, where it is now. Super exciting that in mainstream conversations we're talking about super cloud abstractions and interoperability. Things that were once really hard to do, even back in the OpenStack days. Now we're at a primetime spot where the control plane, the data planes are in play as a viable architectural component of all the biggest conversations. Yeah, you're in the middle of it. What's your take on it? Give some perspective of why this is so important. >>I mean, look, the key here is to standardize, right? Get to standardization, right? And what we saw, like early days of cloud native, it was mostly around Kubernetes, but it was Kubernetes as, you know, essentially a container orchestrator, the container wars, Docker, Mesos, et cetera. And then Kubernetes emerged as the winner in containers, right? But containers is a workload, one kind of workload. It's, I run containers on it, not everything's containers, right? And, you know, what we're seeing now is the Kubernetes API is emerging as a way to standardize on literally everything in cloud.
Not just containers, but you know, VMs, serverless, Lambda, et cetera, storage databases that all using a common approach, a common API layer, a common way to do access control, a common way to do policy, all built around open source projects and you know, the cloud data of ecosystem that you were seeing around here. And that's exciting cuz we've, for the first time we're arriving at some kind of standardization. >>Every major inflection point has this defacto standard evolution, then it becomes kind of commonplace. Great. I agree with Kubernetes. The question I wanted to ask you is what's the impact to the DevOps community? DevSecOps absolutely dominated the playbook, if you will. Developers we're saying we'll run companies cuz they'll be running the applications. It's not a department anymore. Yes, it is the business. If you believe the digital transformation finds its final conclusion, which it will at some point. So more developers doing more, ask more stuff. >>Look, if you, I'd be hard pressed to find somebody that's has a title of DevOps or SRE that can't at least spell Kubernetes, if not running in production, right? And so from that perspective, I think this is a welcome change. Standardize on something that's already familiar to everyone is actually really powerful. They don't have to go, Okay, we learned Kubernetes, now you guys are taking us down a different path of standardization. Or something else has emerged. It's the same thing. It's like we have what, eight years now of cloud native roughly. And, and people in the DevOps space welcome a change where they are basically standardizing on things that are working right? They're actually working right? And they could be used in more use cases, in more scenarios than they're actually, you know, become versatile. They become, you know, ubiquitous as >>You will take a minute to just explain what you guys are selling and doing. What's the product, what's the traction, why are people using you? What's the big, big mo position value statement you guys think? >>Yeah, so, so, so the, my company's called Upbound and where the, where the folks behind the, the cross plane project and cross plane is effective, takes Kubernetes and extends it to beyond containers and to ev managing everything in cloud, right? So if you think about that, if you love the model where you're like, I, I go to Kubernetes cluster and I tell it to run a bunch of containers and it does it for me and I walk away, you can do that for the rest of the surface area of cloud, including your VMs and your storage and across cloud vendors, hybrid models, All of it works in a consistent standardized way, you know, using crossline, right? And I found >>What do you solve? What do you solve or eliminate? What happens? Why does this work? Are you replacing something? Are you extracting away something? Are you changing >>Something? I think we're layering on top of things that people have, right? So, so you'll see people are organized differently. We see a common pattern now where there's shared services teams or platform teams as you hear within enterprises that are responsible for basically managing infrastructure and offering a self-service experience to developers, right? Those teams are all about standardization. They're all about creating things that help them reduce the toil, manage things in a common way, and then offer self-service abstractions to their, you know, developers and customers. So they don't have to be in the middle of every request. Things can go faster. 
We're seeing a pattern now where the, these teams are standardizing on the Kubernetes API or standardizing on cross plane and standardizing on things that make their life easier, right? They don't have to replace what they're doing, they just have to layer and use it. And I layer it's probably a, an opening for you that makes it sound >>More complex, I think, than what you're actually trying to do. I mean, you as a company are all about velocity as an ethos, which I think is great. Do you think that standardization is the key in increasing velocity for teams leveraging both cross claim, Kubernetes? Anyone here? >>Look, I mean, everybody's trying to achieve the same thing. Everybody wants to go faster, they want to innovate faster. They don't want tech to be the friction to innovation, right? Right. They want, they wanna go from feature to production in minutes, right? And so, or less to that extent, standardization is a way to achieve that. It's not the only way to achieve that. It's, it's means to achieve that. And if you've standardized, that means that less people are involved. You can automate more, you can st you can centralize. And by doing that, that means you can innovate faster. And if you don't innovate these days, you're in trouble. Yeah. You're outta business. >>Do you think that, so Kubernetes has a bit of a reputation for complexity. You're obviously creating a tool that makes things easier as you apply Kubernetes outside just an orchestration and container environment. Do you, what do you see those advantages being across the spectrum of tools that people are leveraging you >>For? Yeah, I mean, look, if Kubernetes is a platform, right? To build other things on top of, and as a, as a result, it's something that's used to kind of on the back end. Like you would never, you should put something in front of Kubernetes as an application model or consumption interface of portals or Right, Yeah. To give zero teams. But you should still capture all your policies, you know, automation and compliance governance at the Kubernetes layer, right? At the, or with cross plane at that layer as well, right? Right. And so if you follow that model, you can get the best of world both worlds. You standardize, you centralize, you are able to have, you know, common controls and policies and everything else, but you can expose something that's a dev friendly experience on top of as well. So you get the both, both the best of both worlds. >>So the problem with infrastructure is code you're saying is, is that it's not this new layer to go across environments. Does that? No, >>Infrastructure is code works slightly differently. I mean, you, you can, you can write, you know, infrastructures, codes using whatever tooling you like to go across environments. The problem with is that everybody has to learn a specific language or has to work with understanding the constructs. There's the beauty of the Kubernetes based approach and the cross playing best approach is that it puts APIs first, right? It's basically saying, look, kind of like the API meant that it, that led to AWS being created, right? Teams should interact with APIs. They're super strong contracts, right? They're visionable. Yeah. And if you, if you do that and that's kind of the power of this approach, then you can actually reach a really high level of automation and a really high level of >>Innovation. 
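The pattern Tabbara describes here — declaring non-container resources through the Kubernetes API and letting a control plane reconcile them — looks roughly like this from a client's point of view. The group, version, kind, and spec fields below are hypothetical stand-ins for whichever provider CRDs a platform team has installed, not the exact Crossplane schema; the only real API used is the official Kubernetes Python client.

```python
# Sketch: creating a cloud resource by writing a custom object to the
# Kubernetes API and letting the installed controller reconcile it.
# The group/version/kind and spec fields are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod
api = client.CustomObjectsApi()

bucket = {
    "apiVersion": "storage.example.org/v1alpha1",   # stand-in for a provider CRD
    "kind": "Bucket",
    "metadata": {"name": "team-a-artifacts"},
    "spec": {
        "forProvider": {"region": "us-east-1"},      # desired state, not imperative calls
        "providerConfigRef": {"name": "default"},
    },
}

api.create_cluster_custom_object(
    group="storage.example.org",
    version="v1alpha1",
    plural="buckets",
    body=bucket,
)
# From here the control plane owns the lifecycle: drift gets corrected and
# status is reported back on the same object (e.g. `kubectl get buckets`),
# which is what lets platform teams expose this as a self-service abstraction.
```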
>> And this, also just not to bring in the clouds here, but this might bring up the idea that common services create interoperability, and yet the hyperscale clouds could still differentiate on value, from much faster processors, if it's silicon, to better functions, you know, Lambda, right? I mean, so there's still, it's not killing innovation. >> It is not. And in fact, you know, this idea of building something that looks like the lowest common denominator across clouds, we don't actually see that in practice, right? People want to use the best services available to them because they don't have time to go, you know, build portability layers and everything else. But even in that model they still want to standardize on how to call these services, how to set policy on them, how to set access control, how to actually invoke them. If you can standardize on that, you still get to use these services and you get the benefits of standardization. >> Well, Savannah, we were talking about this, about the Berkeley paper that came out in May, which is kind of a supercloud version they call sky computing. Their argument is that if you try to standardize too much, like the old kind of OSI model back in the day, you're actually going to stunt the growth of innovation. Do you agree with that? And how do you see it? Because standardization is not so much a spec, it's not an IETF thing, it's not an IEEE committee. Yeah. It's not that kind of standard, it's more de facto. >> I mean, look, we've had standards emerge, like, you know, if you look at MySQL, for example, and the Postgres movement. There are now lots of vendors that offer interfaces that support Postgres even though they're differentiated completely on how it's implemented. So you see that if you can stick to open interfaces and use services that offer them, there's tons of differentiation yet still, you know, some kind of open interface, if you will. But there are also differentiated services that don't have open interfaces and that's okay too. As long as you're able to kind of find a way to manage them in a consistent way, and it makes sense to your business, you should use them. >> So enterprises like this. And just, not to get too into the business model side, but real quick, how are you guys making money? You got the Crossplane project, that's community. What are you charging for, what's the business model? >> We're in the business of helping people adopt and run control planes that do all this management: managed services and customer support and services, the plethora of things that people need, while we're- >> Keeping the project. >> Keeping the project, correct. >> So that's the key. >> That's correct. Yeah. You have to balance both. >> And you're all over the show. I mean, outside of the keynote mention, looking here, you have four events on. Where can people find you if they're tuning in? We're just at the beginning and there's a lot of looks here. >> Upbound.io is the place to find Upbound, and we have a lot of talks; you'll see Crossplane mentioned in lots of talks, and a number of talks today. We have a happy hour later today, we've got a booth set up. >> So I'll be there, folks. Just FYI. >> And everyone will be there now. Yeah. Quick update, what's new with the Crossplane project? Can you share a little commercial? What are the most important stories going on there?
>> So Crossplane is growing obviously, and we're seeing a ton of adoption of Crossplane, especially actually in large enterprises, which is really exciting 'cause they're usually the slow ones to move, and Crossplane is so central. So it's now in hundreds and thousands of deployments, woohoo, which is amazing to see. And so the project itself is adding a ton of features, reducing friction in terms of adoption, how people write these control planes and author them, coverage of the space. As you know, control planes are only useful when you connect them to things, and the amount of things you can connect control planes to is increasing on a day-to-day basis and the maturity is increasing. So it's just super exciting to see all of this right now. >> How would you categorize the landscape? We were just talking earlier in another segment, we're in Detroit, Motor City, you know, it's like teaching someone how to drive a car. Kubernetes plus, okay, switch the gears, like, you know, don't hit the other guy, you know? Now once you learn how to drive, they want a sports car. How do you keep that progression going? How do you keep people growing continuously? Where do you see the DevOps folks, or the folks that are doing Crossplane, that are API hardcore? 'Cause that's a good IQ that shows 'em that they're advancing. Where's the IQ level of advancement relative to the industry? Is the adoption just, you know, getting going? Are people advancing? Yeah, sounds like your customers are heavily down the road. >> Yeah, the way I would describe it is there's a progression happening, right? DevOps initially was like, how do I keep things running, right? And it transitioned to, how do I automate things so that I don't have to be involved when things are running? Right now we're seeing the next turn, which is, how do I build what looks like a product that offers shared services or a platform, so that people consume it like a product, right? Yeah. And now the transition becomes, well, I'm a developer on a product in operations, building something that looks like a product and thinking about it as having a user interface. >> Ops are the new devs. >> That's correct. Yeah. There we go. >> Talk about layers. Talk about layers on layers on layers. >> It's not confusing at all, John. >> Well, you know, when they have the architecture-less product, that's coming. Yeah. But this is what's, I mean, the devs have got so much DevOps in the front and the CI/CD pipeline, the ops teams are now retrofitting themselves to be data and security mainly. And that's just guardrails, automation, policy, seeing a lot of that kind of network- >> Like, exactly, function. >> Yep. And they're composing, maybe coding a little bit, but they're not- >> Very much. They're in the composition, you know, as a daily thing. They're writing compositions, they're building things, they're putting them together and making them work. >> How new is this in your mind? 'Cause you've been watching this progress, you're in the middle of it, you're in the front wave of this. Is it being adopted faster now than ever before? I mean, if we talked five years ago, we were kind of saying this might happen, but it wasn't happening then. Today it kind of is. >> It's kind of amazing. Like, everybody's writing these cloud services now. Everybody's authoring things that look like API services that do things on top of the infrastructure.
That move has a ton of momentum right now and it's happening mainstream. It's becoming mainstream. >> Speaking of momentum, I saw both on your LinkedIn as well as on your badge today that you are hiring. This is your opportunity to shamelessly plug. What are you looking for? What can people expect in terms of your company culture? >> Yeah, so we're obviously hiring, we're hiring both on the go-to-market side and on the product and engineering side. If you want to build, well, a new cloud platform, I won't say the word supercloud again, but if you're excited about building a cloud platform that literally sits on top of, you know, the other cloud platforms and offers services on top of this, come talk to us. We're building something amazing. >> You're creating a supercloud toolkit. I'll say it. >> On that note, I think John Furrier has now managed to get seven uses of the word supercloud into this broadcast. We'll see more tomorrow. Thank you so much for joining us today. It's been a pleasure. I can't wait to see more of you throughout the course of KubeCon. My name is Savannah Peterson, everyone, and thank you so much for joining us here on theCUBE, where we'll be live from Detroit, Michigan all week.

Published Date : Oct 26 2022

theCUBE Previews Supercomputing 22


 

(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Controlled Data Corporations, CDC, designed by an engineering team led by Seymour Cray, the father of Supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants, they all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, it's going to probably continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. 
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend, HPC in the cloud reached critical mass at the end of the last decade. And all of the major hyperscalers are providing HPE, HPC as a service capability. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, so if you're designing an island that is, you know, tip of this spear, doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leverage by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box of looking at the Nix, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >> Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. 
They got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, wireless HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. 
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)

Published Date : Oct 25 2022

Anais Dotis Georgiou, InfluxData


 

(upbeat music) >> Okay, we're back. I'm Dave Vellante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here. She's a developer advocate for InfluxData and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >> Hi, thank you so much. It's a pleasure to be here. >> Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course for speed. It's a columnar store, so it gives you compression efficiency, it's going to give you faster query speeds, it's going to let you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >> Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me, the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver the best-in-class performance on analytics queries. In addition to our already well-served metric queries, we also want to have operator control over memory usage. So you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, super useful. Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python and maybe even Pandas in the future. >> Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns and you've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. Its adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >> Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its, like, innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
So Rust, like, helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow, and this control over memory, and also Rust's packaging system called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, to protect against buffer overflows and to ensure thread-safe async caching structures as well. So essentially it just has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >> Yeah, and the more I learn about the new engine and the platform, IOx et cetera, you see things like, in the old days and even today, you do a lot of garbage collection in these systems and there's an inverse impact relative to performance. So it looks like you're really, the community is modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow and what does it bring to InfluxDB? >> Sure. Yeah. So Arrow is a framework for defining in-memory column data. And so much of the efficiency and performance of IOx comes from taking advantage of column data structures. And I will, if you don't mind, take a moment to kind of illustrate why column data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our store. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have like two rows with the two temperature values for both our room and the store. Well, usually our room temperature is regulated, so those values don't change very often. So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other, and when they neighbor each other in the storage format this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you want to find, like, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the store. You'd have to go across every tag value that maybe describes where the room is located or what model the store is. And at every timestamp you then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row.
So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as column-oriented, and Apache Arrow is an in-memory column data framework. So that's where a lot of the advantages come from. >> Okay. So you've basically described, like, a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle column format, versus what you're talking about, which is really kind of native. Is the former not as effective because it's largely a bolt-on? Can you elucidate on that front? >> Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >> Yeah. Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >> Sure. So it's an extensible query execution framework and it uses Arrow as its in-memory format. So the way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the querying, processing and transformation of that data. It also has a Pandas API so that you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas. >> Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >> Sure. So Parquet is the column-oriented durable file format. So it's important because it'll enable bulk import and bulk export. It has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >> Got it. Very popular. So with these, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >> Sure. So InfluxData first has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying Influx. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long-term strategy here is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >> Yeah. Got it. You got that virtuous cycle going, people call it the flywheel.
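To make the column-versus-row and Parquet points above a little more concrete, here is a small illustrative sketch in Python using pyarrow and pandas. It is a sketch only: the data is synthetic, the column and file names are made up for the example, and the size difference versus CSV will vary with the data, so the "16 times cheaper" figure quoted above should be read as the speaker's reference point rather than something this script guarantees.

    # Illustrative sketch: columnar data with pyarrow, a min/max scan over one
    # column, and a Parquet-vs-CSV size comparison on synthetic temperature data.
    import os
    import pandas as pd
    import pyarrow as pa
    import pyarrow.compute as pc
    import pyarrow.parquet as pq

    n = 100_000
    df = pd.DataFrame({
        "time": pd.date_range("2022-10-01", periods=n, freq="s"),
        "room": ["kitchen"] * n,          # low-cardinality tag: long runs of equal values
        "temp_room": [21.0] * n,          # regulated temperature: compresses very cheaply
        "temp_store": [18.0 + (i % 5) * 0.1 for i in range(n)],
    })

    table = pa.Table.from_pandas(df)

    # Column-oriented scan: min/max of one column touches only that column's values.
    print(pc.min_max(table["temp_room"]))

    # Durable column-oriented file versus a row-oriented text file.
    pq.write_table(table, "temps.parquet")
    df.to_csv("temps.csv", index=False)
    print("parquet bytes:", os.path.getsize("temps.parquet"))
    print("csv bytes:    ", os.path.getsize("temps.csv"))

Run end to end, the redundancy in the regulated temperature column is what makes the Parquet file dramatically smaller than the CSV, which is the same effect the discussion above attributes to column-oriented storage.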
Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >> So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard questions, and you just want to learn more, then I would encourage you to go to the monthly tech talks and community office hours, and they are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel. Look for the InfluxDB underscore IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect and what you'd like to learn more about. As a developer advocate, I want to answer your questions. So if there's a particular technology or stack that you want to dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >> Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >> Thank you. I really appreciate it. >> All right, you're very welcome. Okay, stay right there and in a moment I'll be back with Tim Yoakam. He's the director of engineering for InfluxData and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this. (upbeat music)

Published Date : Oct 18 2022

David Flynn Supercloud Audio


 

>> From every ISV to solve the problems. You want there to be tools in place that you can use, either open source tools or whatever it is that help you build it. And slowly over time, that building will become easier and easier. So my question to you was, where do you see you playing? Do you see yourself playing to ISVs as a set of tools, which will make their life a lot easier and provide that work? >> Absolutely. >> If they don't have, so they don't have to do it. Or you're providing this for the end users? Or both? >> So it's a progression. If you go to the ISVs first, you're doomed to starved before you have time for that other option. >> Yeah. >> Right? So it's a question of phase, the phasing of it. And also if you go directly to end users, you can demonstrate the power of it and get the attention of the ISVs. I believe that the ISVs, especially those with the biggest footprints and the most, you know, coveted estates, they have already made massive investments at trying to solve decentralization of their software stack. And I believe that they have used it as a hook to try to move to a software as a service model and rope people into leasing their infrastructure. So if you look at the clouds that have been propped up by Autodesk or by Adobe, or you name the company, they are building proprietary makeshift solutions for decentralizing or hybrid clouding. Or maybe they're not even doing that at all and all they're is saying hey, if you want to get location agnosticness, then what you should just, is just move into our cloud. >> Right. >> And then they try to solve on the background how to decentralize it between different regions so they can have decent offerings in each region. But those who are more advanced have already made larger investments and will be more averse to, you know, throwing that stuff away, all of their makeshift machinery away, and using a platform that gives them high performance parallel, low level file system access, while at the same time having metadata-driven, you know, policy-based, intent-based orchestration to manage the diffusion of data across a decentralized infrastructure. They are not going to be as open because they've made such an investment and they're going to look at how do they monetize it. So what we have found with like the movie studios who are using us already, many of the app they're using, many of those software offerings, the ISVs have their own cloud that offers that software for the cloud. But what we got when I asked about this, 'cause I was dealt specifically into this question because I'm very interested to know how we're going to make that leap from end user upstream into the ISVs where I believe we need to, and they said, look, we cannot use these software ISV-specific SAS clouds for two reasons. Number one is we lose control of the data. We're giving it to them. That's security and other issues. And here you're talking about we're doing work for Disney, we're doing work for Netflix, and they're not going to let us put our data on those software clouds, on those SAS clouds. Secondly, in any reasonable pipeline, the data is shared by many different applications. We need to be agnostic as to the application. 'Cause the inputs to one application, you know, the output for one application provides the input to the next, and it's not necessarily from the same vendor. So they need to have a data platform that lets them, you know, go from one software stack, and you know, to run it on another. 
Because they might do the rendering with this and yet, they do the editing with that, and you know, et cetera, et cetera. So I think the further you go up the stack in the structured data and dedicated applications for specific functions in specific verticals, the further up the stack you go, the harder it is to justify a SAS offering where you're basically telling the end users you need to park all your data with us and then you can run your application in our cloud and get this. That ultimately is a dead end path versus having the data be open and available to many applications across this supercloud layer. >> Okay, so-- >> Is that making any sense? >> Yes, so if I could just ask a clarifying question. So, if I had to take Snowflake as an example, I think they're doing exactly what you're saying is a dead end, put everything into our proprietary system and then we'll figure out how to distribute it. >> Yeah. >> And and I think if you're familiar with Zhamak Dehghaniis' data mesh concept. Are you? >> A little bit, yeah. >> But in her model, Snowflake, a Snowflake warehouse is just a node on the mesh and that mesh is-- >> That's right. >> Ultimately the supercloud and you're an enabler of that is what I'm hearing. >> That's right. What they're doing up at the structured level and what they're talking about at the structured level we're doing at the underlying, unstructured level, which by the way has implications for how you implement those distributed database things. In other words, implementing a Snowflake on top of Hammerspace would have made building stuff like in the first place easier. It would allow you to easily shift and run the database engine anywhere. You still have to solve how to shard and distribute at the transaction layer above, so I'm not saying we're a substitute for what you need to do at the app layer. By the way, there is another example of that and that's Microsoft Office, right? It's one thing to share that, to have a file share where you can share all the docs. It's something else to have Word and PowerPoint, Excel know how to allow people to be simultaneously editing the same doc. That's always going to happen in the app layer. But not all applications need that level of, you know, in-app decentralization. You know, many of them, many workflows are pipelined, especially the ones that are very data intensive where you're doing drug discovery or you're doing rendering, or you're doing machine learning training. These things are human in the loop with large stages of processing across tens of thousands of cores. And I think that kind of data processing pipeline is what we're focusing on first. Not so much the Microsoft Office or the Snowflake, you know, parking a relational database because that takes a lot of application layer stuff and that's what they're good at. >> Right. >> But I think... >> Go ahead, sorry. >> Later entrance in these markets will find Hammerspace as a way to accelerate their work so they can focus more narrowly on just the stuff that's app-specific, higher level sharing in the app. >> Yes, Snowflake founders-- >> I think it might be worth mentioning also, just keep this confidential guys, but one of our customers is Blue Origin. And one of the things that we have found is kind of the point of what you're talking about with our customers. They're needing to build this and since it's not commercially available or they don't know where to look for it to be commercially available, they're all building themselves. So this layer is needed. 
And Blue is just one of the examples of quite a few we're now talking to. And like manufacturing, HPC, research where they're out trying to solve this problem with their own scripting tools and things like that. And I just, I don't know if there's anything you want to add, David, but you know, but there's definitely a demand here and customers are trying to figure out how to solve it beyond what Hammerspace is doing. Like the need is so great that they're just putting developers on trying to do it themselves. >> Well, and you know, Snowflake founders, they didn't have a Hammerspace to lean on. But, one of the things that's interesting about supercloud is we feel as though industry clouds will emerge, that as part of company's digital transformations, they will, you know, every company's a software company, they'll begin to build their own clouds and they will be able to use a Hammerspace to do that. >> A super pass layer. >> Yes. It's really, I don't know if David's speaking, I don't want to speak over him, but we can't hear you. May be going through a bad... >> Well, a regional, regional talks that make that possible. And so they're doing these render farms and editing farms, and it's a cloud-specific to the types of workflows in the median entertainment world. Or clouds specifically to workflows in the chip design world or in the drug and bio and life sciences exploration world. There are large organizations that are kind of a blend of end users, like the Broad, which has their own kind of cloud where they're asking collaborators to come in and work with them. So it starts to even blur who's an end user versus an ISV. >> Yes. >> Right? When you start talking about the massive data is the main gravity is to having lots of people participate. >> Yep, and that's where the value is. And that's where the value is. And this is a megatrend that we see. And so it's really important for us to get to the point of what is and what is not a supercloud and, you know, that's where we're trying to evolve. >> Let's talk about this for a second 'cause I want to, I want to challenge you on something and it's something that I got challenged on and it has led me to thinking differently than I did at first, which Molly can attest to. Okay? So, we have been looking for a way to talk about the concept of cloud of utility computing, run anything anywhere that isn't addressed in today's realization of cloud. 'Cause today's cloud is not run anything anywhere, it's quite the opposite. You park your data in AWS and that's where you run stuff. And you pretty much have to. Same with with Azure. They're using data gravity to keep you captive there, just like the old infrastructure guys did. But now it's even worse because it's coupled back with the software to some degree, as well. And you have to use their storage, networking, and compute. It's not, I mean it fell back to the mainframe era. Anyhow, so I love the concept of supercloud. By the way, I was going to suggest that a better term might be hyper cloud since hyper speaks to the multidimensionality of it and the ability to be in a, you know, be in a different dimension, a different plane of existence kind of thing like hyperspace. But super and hyper are somewhat synonyms. I mean, you have hyper cars and you have super cars and blah, blah, blah. I happen to like hyper maybe also because it ties into the whole Hammerspace notion of a hyper-dimensional, you know, reality, having your data centers connected by a wormhole that is Hammerspace. 
But regardless, what I got challenged on is calling it something different at all versus simply saying, this is what cloud has always meant to be. This is the true cloud, this is real cloud, this is cloud. And I think back to what happened, you'll remember, at Fusion IO we talked about IO memory and we did that because people had a conceptualization of what an SSD was. And an SSD back then was low capacity, low endurance, made to go military, aerospace where things needed to be rugged but was completely useless in the data center. And we needed people to imagine this thing as being able to displace entire SAND, with the kind of capacity density, performance density, endurance. And so we talked IO memory, we could have said enterprise SSD, and that's what the industry now refers to for that concept. What will people be saying five and 10 years from now? Will they simply say, well this is cloud as it was always meant to be where you are truly able to run anything anywhere and have not only the same APIs, but you're same data available with high performance access, all forms of access, block file and object everywhere. So yeah. And I wonder, and this is just me throwing it out there, I wonder if, well, there's trade offs, right? Giving it a new moniker, supercloud, versus simply talking about how cloud is always intended to be and what it was meant to be, you know, the real cloud or true cloud, there are trade-offs. By putting a name on it and branding it, that lets people talk about it and understand they're talking about something different. But it also is that an affront to people who thought that that's what they already had. >> What's different, what's new? Yes, and so we've given a lot of thought to this. >> Right, it's like you. >> And it's because we've been asked that why does the industry need a new term, and we've tried to address some of that. But some of the inside baseball that we haven't shared is, you remember the Web 2.0, back then? >> Yep. >> Web 2.0 was the same thing. And I remember Tim Burners Lee saying, "Why do we need Web 2.0? "This is what the Web was always supposed to be." But the truth is-- >> I know, that was another perfect-- >> But the truth is it wasn't, number one. Number two, everybody hated the Web 2.0 term. John Furrier was actually in the middle of it all. And then it created this groundswell. So one of the things we wrote about is that supercloud is an evocative term that catalyzes debate and conversation, which is what we like, of course. And maybe that's self-serving. But yeah, HyperCloud, Metacloud, super, meaning, it's funny because super came from Latin supra, above, it was never the superlative. But the superlative was a convenient byproduct that caused a lot of friction and flack, which again, in the media business is like a perfect storm brewing. >> The bad thing to have to, and I think you do need to shake people out of their, the complacency of the limitations that they're used to. And I'll tell you what, the fact that you even have the terms hybrid cloud, multi-cloud, private cloud, edge computing, those are all just referring to the different boundaries that isolate the silo that is the current limited cloud. >> Right. >> So if I heard correctly, what just, in terms of us defining what is and what isn't in supercloud, you would say traditional applications which have to run in a certain place, in a certain cloud can't run anywhere else, would be the stuff that you would not put in as being addressed by supercloud. 
And over time, you would want to be able to run the data where you want to and in any of those concepts. >> Or even modern apps, right? Or even modern apps that are siloed in SAS within an individual cloud, right? >> So yeah, I guess it's twofold. Number one, if you're going at the high application layers, there's lots of ways that you can give the appearance of anything running anywhere. The ISV, the SAS vendor can engineer stuff to have the ability to serve with low enough latency to different geographies, right? So if you go too high up the stack, it kind of loses its meaning because there's lots of different ways to make due and give the appearance of omni-presence of the service. Okay? As you come down more towards the platform layer, it gets harder and harder to mask the fact that supercloud is something entirely different than just a good regionally-distributed SAS service. So I don't think you, I don't think you can distinguish supercloud if you go too high up the stack because it's just SAS, it's just a good SAS service where the SAS vendor has done the hard work to give you low latency access from different geographic regions. >> Yeah, so this is one of the hardest things, David. >> Common among them. >> Yeah, this is really an important point. This is one of the things I've had the most trouble with is why is this not just SAS? >> So you dilute your message when you go up to the SAS layer. If you were to focus most of this around the super pass layer, the how can you host applications and run them anywhere and not host this, not run a service, not have a service available everywhere. So how can you take any application, even applications that are written, you know, in a traditional legacy data center fashion and be able to run them anywhere and have them have their binaries and their datasets and the runtime environment and the infrastructure to start them and stop them? You know, the jobs, the, what the Kubernetes, the job scheduler? What we're really talking about here, what I think we're really talking about here is building the operating system for a decentralized cloud. What is the operating system, the operating environment for a decentralized cloud? Where you can, and that the main two functions of an operating system or an operating environment are the process scheduler, the thing that's scheduling what is running where and when and so forth, and the file system, right? The thing that's supplying a common view and access to data. So when we talk about this, I think that the strongest argument for supercloud is made when you go down to the platform layer and talk of it, talk about it as an operating environment on which you can run all forms of applications. >> Would you exclude--? >> Not a specific application that's been engineered as a SAS. (audio distortion) >> He'll come back. >> Are you there? >> Yeah, yeah, you just cut out for a minute. >> I lost your last statement when you broke up. >> We heard you, you said that not the specific application. So would you exclude Snowflake from supercloud? >> Frankly, I would. I would. Because, well, and this is kind of hard to do because Snowflake doesn't like to, Frank doesn't like to talk about Snowflake as a SAS service. It has a negative connotation. >> But it is. >> I know, we all know it is. We all know it is and because it is, yes, I would exclude them. >> I think I actually have him on camera. >> There's nothing in common. >> I think I have him on camera or maybe Benoit as saying, "Well, we are a SAS." 
I think it's Slootman. I think I said to Slootman, "I know you don't like to say you're a SAS." And I think he said, "Well, we are a SAS." >> Because again, if you go to the top of the application stack, there's any number of ways you can give it location agnostic function or you know, regional, local stuff. It's like let's solve the location problem by having me be your one location. How can it be decentralized if you're centralizing on (audio distortion)? >> Well, it's more decentralized than if it's all in one cloud. So let me actually, so the spectrum. So again, in the spirit of what is and what isn't, I think it's safe to say Hammerspace is supercloud. I think there's no debate there, right? Certainly among this crowd. And I think we can all agree that Dell, Dell Storage is not supercloud. Where it gets fuzzy is this Snowflake example or even, how about a, how about a Cohesity that instantiates its stack in different cloud regions in different clouds, and synchronizes, however magic sauce it does that. Is that a supercloud? I mean, so I'm cautious about having too strict of a definition 'cause then only-- >> Fair enough, fair enough. >> But I could use your help and thoughts on that. >> So I think we're talking about two different spectrums here. One is the spectrum of platform to application-specific. As you go up the application stack and it becomes this specific thing. Or you go up to the more and more structured where it's serving a specific application function where it's more of a SAS thing. I think it's harder to call a SAS service a supercloud. And I would argue that the reason there, and what you're lacking in the definition is to talk about it as general purpose. Okay? Now, that said, a data warehouse is general purpose at the structured data level. So you could make the argument for why Snowflake is a supercloud by saying that it is a general purpose platform for doing lots of different things. It's just one at a higher level up at the structured data level. So one spectrum is the high level going from platform to, you know, unstructured data to structured data to very application-specific, right? Like a specific, you know, CAD/CAM mechanical design cloud, like an Autodesk would want to give you their cloud for running, you know, and sharing CAD/CAM designs, doing your CAD/CAM anywhere stuff. Well, the other spectrum is how well does the purported supercloud technology actually live up to allowing you to run anything anywhere with not just the same APIs but with the local presence of data with the exact same runtime environment everywhere, and to be able to correctly manage how to get that runtime environment anywhere. So a Cohesity has some means of running things in different places and some means of coordinating what's where and of serving diff, you know, things in different places. I would argue that it is a very poor approximation of what Hammerspace does in providing the exact same file system with local high performance access everywhere with metadata ability to control where the data is actually instantiated so that you don't have to wait for it to get orchestrated. But even then when you do have to wait for it, it happens automatically and so it's still only a matter of, well, how quick is it? And on the other end of the spectrum is you could look at NetApp with Flexcache and say, "Is that supercloud?" And I would argue, well kind of because it allows you to run things in different places because it's a cache. 
But you know, it really isn't because it presumes some central silo from which you're cacheing stuff. So, you know, is it or isn't it? Well, it's on a spectrum of exactly how fully is it decoupling a runtime environment from specific locality? And I think a cache doesn't, it stretches a specific silo and makes it have some semblance of similar access in other places. But there's still a very big difference to the central silo, right? You can't turn off that central silo, for example. >> So it comes down to how specific you make the definition. And this is where it gets kind of really interesting. It's like cloud. Does IBM have a cloud? >> Exactly. >> I would say yes. Does it have the kind of quality that you would expect from a hyper-scale cloud? No. Or see if you could say the same thing about-- >> But that's a problem with choosing a name. That's the problem with choosing a name supercloud versus talking about the concept of cloud and how true up you are to that concept. >> For sure. >> Right? Because without getting a name, you don't have to draw, yeah. >> I'd like to explore one particular or bring them together. You made a very interesting observation that from a enterprise point of view, they want to safeguard their store, their data, and they want to make sure that they can have that data running in their own workflows, as well as, as other service providers providing services to them for that data. So, and in in particular, if you go back to, you go back to Snowflake. If Snowflake could provide the ability for you to have your data where you wanted, you were in charge of that, would that make Snowflake a supercloud? >> I'll tell you, in my mind, they would be closer to my conceptualization of supercloud if you can instantiate Snowflake as software on your own infrastructure, and pump your own data to Snowflake that's instantiated on your own infrastructure. The fact that it has to be on their infrastructure or that it's on their, that it's on their account in the cloud, that you're giving them the data and they're, that fundamentally goes against it to me. If they, you know, they would be a pure, a pure plate if they were a software defined thing where you could instantiate Snowflake machinery on the infrastructure of your choice and then put your data into that machinery and get all the benefits of Snowflake. >> So did you see--? >> In other words, if they were not a SAS service, but offered all of the similar benefits of being, you know, if it were a service that you could run on your own infrastructure. >> So did you see what they announced, that--? >> I hope that's making sense. >> It does, did you see what they announced at Dell? They basically announced the ability to take non-native Snowflake data, read it in from an object store on-prem, like a Dell object store. They do the same thing with Pure, read it in, running it in the cloud, and then push it back out. And I was saying to Dell, look, that's fine. Okay, that's interesting. You're taking a materialized view or an extended table, whatever you're doing, wouldn't it be more interesting if you could actually run the query locally with your compute? That would be an extension that would actually get my attention and extend that. >> That is what I'm talking about. That's what I'm talking about. And that's why I'm saying I think Hammerspace is more progressive on that front because with our technology, anybody who can instantiate a service, can make a service. 
And so I, so MSPs can use Hammerspace as a way to build a super pass layer and host their clients on their infrastructure in a cloud-like fashion. And their clients can have their own private data centers and the MSP or the public clouds, and Hammerspace can be instantiated, get this, by different parties in these different pieces of infrastructure and yet linked together to make a common file system across all of it. >> But this is data mesh. If I were HPE and Dell it's exactly what I'd be doing. I'd be working with Hammerspace to create my own data. I'd work with Databricks, Snowflake, and any other-- >> Data mesh is a good way to put it. Data mesh is a good way to put it. And this is at the lowest level of, you know, the underlying file system that's mountable by the operating system, consumed as a real file system. You can't get lower level than that. That's why this is the foundation for all of the other apps and structured data systems because you need to have a data mesh that can at least mesh the binary blob. >> Okay. >> That hold the binaries and that hold the datasets that those applications are running. >> So David, in the third week of January, we're doing supercloud 2 and I'm trying to convince John Furrier to make it a data slash data mesh edition. I'm slowly getting him to the knothole. I would very much, I mean you're in the Bay Area, I'd very much like you to be one of the headlines. As Zhamak Dehghaniis going to speak, she's the creator of Data Mesh, >> Sure. >> I'd love to have you come into our studio as well, for the live session. If you can't make it, we can pre-record. But you're right there, so I'll get you the dates. >> We'd love to, yeah. No, you can count on it. No, definitely. And you know, we don't typically talk about what we do as Data Mesh. We've been, you know, using global data environment. But, you know, under the covers, that's what the thing is. And so yeah, I think we can frame the discussion like that to line up with other, you know, with the other discussions. >> Yeah, and Data Mesh, of course, is one of those evocative names, but she has come up with some very well defined principles around decentralized data, data as products, self-serve infrastructure, automated governance, and and so forth, which I think your vision plugs right into. And she's brilliant. You'll love meeting her. >> Well, you know, and I think.. Oh, go ahead. Go ahead, Peter. >> Just like to work one other interface which I think is important. How do you see yourself and the open source? You talked about having an operating system. Obviously, Linux is the operating system at one level. How are you imagining that you would interface with cost community as part of this development? >> Well, it's funny you ask 'cause my CTO is the kernel maintainer of the storage networking stack. So how the Linux operating system perceives and consumes networked data at the file system level, the network file system stack is his purview. He owns that, he wrote most of it over the last decade that he's been the maintainer, but he's the gatekeeper of what goes in. And we have leveraged his abilities to enhance Linux to be able to use this decentralized data, in particular with decoupling the control plane driven by metadata from the data access path and the many storage systems on which the data gets accessed. So this factoring, this splitting of control plane from data path, metadata from data, was absolutely necessary to create a data mesh like we're talking about. 
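To make the factoring David describes a little more concrete, here is a minimal, hypothetical Python sketch of the idea: a metadata (control-plane) service only answers where a file currently lives, and the client then reads the bytes over the local data path at that site. None of the names or structures below are actual Hammerspace APIs; they are placeholders for illustration only.

```python
# Toy model of the control-plane / data-path split described above.
# The "metadata service" only answers *where* a file currently lives;
# the client then reads the bytes through the local data path at that
# site, so bulk data never flows through the control plane.
# All names here are hypothetical, not Hammerspace APIs.

from dataclasses import dataclass


@dataclass
class Placement:
    site: str          # which data center / cloud region holds a copy
    storage_path: str  # where the bytes live on that site's storage


# Control plane: a tiny metadata catalog mapping global paths to placements.
METADATA_SERVICE = {
    "/projects/renders/frame_0001.exr": Placement("on-prem-dc", "/mnt/nvme/frame_0001.exr"),
    "/projects/renders/frame_0002.exr": Placement("aws-us-west-2", "s3://render-bucket/frame_0002.exr"),
}

# Data path: per-site readers that actually move the bytes (stubbed out here).
DATA_PATH_READERS = {
    "on-prem-dc": lambda path: f"<bytes of {path} read over local NFS>",
    "aws-us-west-2": lambda path: f"<bytes of {path} read from object storage>",
}


def read(global_path: str) -> str:
    placement = METADATA_SERVICE[global_path]       # control plane: metadata only
    reader = DATA_PATH_READERS[placement.site]      # data path: direct, local access
    return reader(placement.storage_path)


if __name__ == "__main__":
    for path in METADATA_SERVICE:
        print(path, "->", read(path))
```

Because only placement metadata crosses sites, one catalog can present the same namespace everywhere while each site keeps fast local access, which is the property David attributes to the NFS 4.2 enhancements.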
And to be able to build this supercloud concept. And the highways on which the data runs and the client which knows how to talk to it is all open source. And we have, we've driven the NFS 4.2 spec. The newest NFS spec came from my team. And it was specifically the enhancements needed to be able to build a spanning file system, a data mesh at a file system level. Now that said, our file system itself and our server, our file server, our data orchestration, our data management stuff, that's all closed source, proprietary Hammerspace tech. But the highways on which the mesh connects are actually all open source and the client that knows how to consume it. So we would, honestly, I would welcome competitors using those same highways. They would be at a major disadvantage because we kind of built them, but it would still be very validating and I think only increase the potential adoption rate by more than whatever they might take of the market. So it'd actually be good to split the market with somebody else to come in and share those now super highways for how to mesh data at the file system level, you know, in here. So yeah, hopefully that answered your question. Does that answer the question about how we embrace the open source? >> Right, and there was one other, just that my last one is how do you enable something to run in every environment? And if we take the edge, for example, as being, as an environment which is much very, very compute heavy, but having a lot less capability, how do you do a hold? >> Perfect question. Perfect question. What we do today is a software appliance. We are using a Linux RHEL 8, RHEL 8 equivalent or a CentOS 8, or it's, you know, they're all roughly equivalent. But we have bundled and a software appliance which can be instantiated on bare metal hardware on any type of VM system from VMware to all of the different hypervisors in the Linux world, to even Nutanix and such. So it can run in any virtualized environment and it can run on any cloud instance, server instance in the cloud. And we have it packaged and deployable from the marketplaces within the different clouds. So you can literally spin it up at the click of an API in the cloud on instances in the cloud. So with all of these together, you can basically instantiate a Hammerspace set of machinery that can offer up this file system mesh. like we've been using the terminology we've been using now, anywhere. So it's like being able to take and spin up Snowflake and then just be able to install and run some VMs anywhere you want and boom, now you have a Snowflake service. And by the way, it is so complete that some of our customers, I would argue many aren't even using public clouds at all, they're using this just to run their own data centers in a cloud-like fashion, you know, where they have a data service that can span it all. >> Yeah and to Molly's first point, we would consider that, you know, cloud. Let me put you on the spot. If you had to describe conceptually without a chalkboard what an architectural diagram would look like for supercloud, what would you say? >> I would say it's to have the same runtime environment within every data center and defining that runtime environment as what it takes to schedule the execution of applications, so job scheduling, runtime stuff, and here we're talking Kubernetes, Slurm, other things that do job scheduling. We're talking about having a common way to, you know, instantiate compute resources. 
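As a rough sketch of the "spin it up at the click of an API" deployment model described above, the snippet below uses boto3 to launch a software appliance from a marketplace-style AMI. The AMI ID, instance type, subnet, and key pair are placeholders rather than real Hammerspace artifacts, and the NFS v4.2 mount is shown only as a comment.

```python
# Minimal sketch: launching a software appliance "at the click of an API".
# The AMI ID, subnet, key pair, and instance type are placeholders; a real
# deployment would use the vendor's marketplace AMI and your own network.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # hypothetical marketplace appliance AMI
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                  # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "data-appliance"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched appliance instance {instance_id}")

# Once the appliance is reachable, clients would mount the shared namespace
# over the standard NFS v4.2 client that ships with Linux, e.g.:
#   mount -t nfs -o vers=4.2 <appliance-address>:/global /mnt/global
```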
So a global compute environment, having a common compute environment where you can instantiate things that need computing. Okay? So that's the first part. And then the second is the data platform where you can have file block and object volumes, and have them available with the same APIs in each of these distributed data centers and have the exact same data omnipresent with the ability to control where the data is from one moment to the next, local, where all the data is instantiate. So my definition would be a common runtime environment that's bifurcate-- >> Oh. (attendees chuckling) We just lost them at the money slide. >> That's part of the magic makes people listen. We keep someone on pin and needles waiting. (attendees chuckling) >> That's good. >> Are you back, David? >> I'm on the edge of my seat. Common runtime environment. It was like... >> And just wait, there's more. >> But see, I'm maybe hyper-focused on the lower level of what it takes to host and run applications. And that's the stuff to schedule what resources they need to run and to get them going and to get them connected through to their persistence, you know, and their data. And to have that data available in all forms and have it be the same data everywhere. On top of that, you could then instantiate applications of different types, including relational databases, and data warehouses and such. And then you could say, now I've got, you know, now I've got these more application-level or structured data-level things. I tend to focus less on that structured data level and the application level and am more focused on what it takes to host any of them generically on that super pass layer. And I'll admit, I'm maybe hyper-focused on the pass layer and I think it's valid to include, you know, higher levels up the stack like the structured data level. But as soon as you go all the way up to like, you know, a very specific SAS service, I don't know that you would call that supercloud. >> Well, and that's the question, is there value? And Marianna Tessel from Intuit said, you know, we looked at it, we did it, and it just, it was actually negative value for us because connecting to all these separate clouds was a real pain in the neck. Didn't bring us any additional-- >> Well that's 'cause they don't have this pass layer underneath it so they can't even shop around, which actually makes it hard to stand up your own SAS service. And ultimately they end up having to build their own infrastructure. Like, you know, I think there's been examples like Netflix moving away from the cloud to their own infrastructure. Basically, if you're going to rent it for more than a few months, it makes sense to build it yourself, if it's at any kind of scale. >> Yeah, for certain components of that cloud. But if the Goldman Sachs came to you, David, and said, "Hey, we want to collaborate and we want to build "out a cloud and essentially build our SAS system "and we want to do that with Hammerspace, "and we want to tap the physical infrastructure "of not only our data centers but all the clouds," then that essentially would be a SAS, would it not? And wouldn't that be a Super SAS or a supercloud? >> Well, you know, what they may be using to build their service is a supercloud, but their service at the end of the day is just a SAS service with global reach. Right? >> Yeah. >> You know, look at, oh shoot. What's the name of the company that does? It has a cloud for doing bookkeeping and accounting. I forget their name, net something. NetSuite. >> NetSuite. 
NetSuite, yeah, Oracle. >> Yeah. >> Yep. >> Oracle acquired them, right? Is NetSuite a supercloud or is it just a SAS service? You know? I think under the covers you might ask are they using supercloud under the covers so that they can run their SAS service anywhere and be able to shop the venue, get elasticity, get all the benefits of cloud in the, to the benefit of their service that they're offering? But you know, folks who consume the service, they don't care because to them they're just connecting to some endpoint somewhere and they don't have to care. So the further up the stack you go, the more location-agnostic it is inherently anyway. >> And I think it's, paths is really the critical layer. We thought about IAS Plus and we thought about SAS Minus, you know, Heroku and hence, that's why we kind of got caught up and included it. But SAS, I admit, is the hardest one to crack. And so maybe we exclude that as a deployment model. >> That's right, and maybe coming down a level to saying but you can have a structured data supercloud, so you could still include, say, Snowflake. Because what Snowflake is doing is more general purpose. So it's about how general purpose it is. Is it hosting lots of other applications or is it the end application? Right? >> Yeah. >> So I would argue general purpose nature forces you to go further towards platform down-stack. And you really need that general purpose or else there is no real distinguishing. So if you want defensible turf to say supercloud is something different, I think it's important to not try to wrap your arms around SAS in the general sense. >> Yeah, and we've kind of not really gone, leaned hard into SAS, we've just included it as a deployment model, which, given the constraints that you just described for structured data would apply if it's general purpose. So David, super helpful. >> Had it sign. Define the SAS as including the hybrid model hold SAS. >> Yep. >> Okay, so with your permission, I'm going to add you to the list of contributors to the definition. I'm going to add-- >> Absolutely. >> I'm going to add this in. I'll share with Molly. >> Absolutely. >> We'll get on the calendar for the date. >> If Molly can share some specific language that we've been putting in that kind of goes to stuff we've been talking about, so. >> Oh, great. >> I think we can, we can share some written kind of concrete recommendations around this stuff, around the general purpose, nature, the common data thing and yeah. >> Okay. >> Really look forward to it and would be glad to be part of this thing. You said it's in February? >> It's in January, I'll let Molly know. >> Oh, January. >> What the date is. >> Excellent. >> Yeah, third week of January. Third week of January on a Tuesday, whatever that is. So yeah, we would welcome you in. But like I said, if it doesn't work for your schedule, we can prerecord something. But it would be awesome to have you in studio. >> I'm sure with this much notice we'll be able to get something. Let's make sure we have the dates communicated to Molly and she'll get my admin to set it up outside so that we have it. >> I'll get those today to you, Molly. Thank you. >> By the way, I am so, so pleased with being able to work with you guys on this. I think the industry needs it very bad. They need something to break them out of the box of their own mental constraints of what the cloud is versus what it's supposed to be. 
And obviously, the more we get people to question their reality and what is real, what are we really capable of today that then the more business that we're going to get. So we're excited to lend the hand behind this notion of supercloud and a super pass layer in whatever way we can. >> Awesome. >> Can I ask you whether your platforms include ARM as well as X86? >> So we have not done an ARM port yet. It has been entertained and won't be much of a stretch. >> Yeah, it's just a matter of time. >> Actually, entertained doing it on behalf of NVIDIA, but it will absolutely happen because ARM in the data center I think is a foregone conclusion. Well, it's already there in some cases, but not quite at volume. So definitely will be the case. And I'll tell you where this gets really interesting, discussion for another time, is back to my old friend, the SSD, and having SSDs that have enough brains on them to be part of that fabric. Directly. >> Interesting. Interesting. >> Very interesting. >> Directly attached to ethernet and able to create a data mesh global file system, that's going to be really fascinating. Got to run now. >> All right, hey, thanks you guys. Thanks David, thanks Molly. Great to catch up. Bye-bye. >> Bye >> Talk to you soon.

Published Date : Oct 5 2022



Bharath Chari, Confluent & Sam Kassoumeh, SecurityScorecard | AWS Startup Showcase S2 E4


 

>>Hey everyone. Welcome to the cube's presentation of the AWS startup showcase. This is season two, episode four of our ongoing series that's featuring exciting startups within the AWS ecosystem. This theme: cybersecurity, protect and detect against threats. I'm your host, Lisa Martin. I've got two guests here with me. Please welcome back to the program Sam Kassoumeh, COO and co-founder of security scorecard, and Bharath Chari, team lead, solutions marketing at confluent. Guys, it's great to have you on the program talking about cybersecurity. >>Thanks for having us, Lisa. >>Sam, let's go ahead and kick off with you. You've been on the cube before, but give the audience just a little bit of context about security scorecard or SSC as they're gonna hear it referred to. >>Yeah, absolutely. Thank you for that. Well, the easiest way to, to put it is when people wanna know about their credit risk, they consult one of the major credit scoring companies. And when companies wanna know about their cybersecurity risk, they turn to security scorecard to get that holistic view of, of, of the security posture. And the way it works is SSC is continuously 24 7 collecting signals from across the entire internet, the entire IPv4 space, and they're doing it to identify vulnerable and misconfigured digital assets. And we were just looking back over like a three year period. We looked from 2019 to 2022. We, we, we assessed through our techniques over a million and a half organizations and found that over half of them had at least one open critical vulnerability exposed to the internet. What was even more shocking was 20% of those organizations had amassed over a thousand vulnerabilities each. >>So SSC, we're in the business of really building solutions for customers. We mine the data from dozens of digital sources and help discover the risks and the flaws that are inherent to their business. And that becomes increasingly important as companies grow and find new sources of risk and new threat vectors that emerge on the internet for themselves and for their vendor and business partner ecosystem. The last thing I'll mention is the platform that we provide. It relies on data collection and processing to be done in an extremely accurate and real time way. That's a key factor that's allowed us to scale. And in order for us to accomplish this, security scorecard engineering teams used a really novel combination of confluent cloud and confluent platform to build really, really robust data streaming pipelines, and the data streaming pipelines enabled by confluent allow us at security scorecard to collect the data from a lot of various sources for risk analysis. Then they get further analyzed and provided to customers as an easy to understand summary of analytics. >>Bharath, let's bring you into the conversation. Talk about confluent, give the audience that overview and then talk about what you're doing together with SSC. >>Yeah, and I wanted to say Sam did a great job of setting up the context about what confluent is. So, so appreciate that, but a really simple way to think about it, Lisa, is confluent is a data streaming platform that is pioneering a fundamentally new category of data infrastructure that is at the core of what SSC does. Like Sam said, the key is really collect data accurately at scale and in real time. And that's where our cloud native offering really empowers organizations like SSC to build great customer experiences for their customers. 
And the other thing we do is we also help organizations build sophisticated real time backend operations. And so at a high level, that's the best way to think about confluent. >>Got it. Bharath, talk about data streaming, how it's being used in cybersecurity and what the data streaming pipelines enabled by confluent allow SSC to do for its customers. >>Yeah, I think Sam can definitely share his thoughts on this, but one of the things I know we are all sort of experiencing is the, is the rise of cyber threats, whether it's online from a business B2B perspective or as consumers, it's our data and, and the data that we're generating and the companies that have access to it. So as the, the need to protect the data really grows, companies and organizations really need to effectively detect, respond and protect their environments. And the best way to do this is through three ways: scale, speed, and cost. And so going back to the points I brought up earlier with confluent, you can really gain real time data ingestion and enable those analytics that Sam talked about previously while optimizing for cost and scale. So doing all of this at the same time, as you can imagine, is, is not easy and that's where we excel. >>And so the entire premise of data streaming is built on the concept that data is not static, but constantly moving across your organization. And that's why we call it data streams. And so at its core, we we've sort of built or leveraged that open source foundation of Apache Kafka, but we have rearchitected it for the cloud with a totally new cloud native experience. And ultimately for customers like SSC, we have taken away the need to manage a lot of those operational tasks when it comes to Apache Kafka. The other thing we've done is we've added a ton of proprietary IP, including security features like role based access control, and that really allows you to securely connect to any data no matter where it resides, at scale, at speed. And it... >>Bharath, sticking with you, can you talk about some of the improvements, and maybe this is actually a question for Sam, some of the improvements that have been achieved on the SSC side as a result of the confluent partnership? Things are much faster and you're able to do much more, I understand. >>Can Sam take it away? I can maybe kick us off and then Bharath, feel free to chime in. Lisa, the, the problem that we're talking about has been for us, it was a longstanding challenge. We're about a nine year old company. We're a high growth startup and data collection has always been in, in our DNA. It's at, it's at the core of what we do and getting, getting the insights, the, and analytics that we synthesize from that data into customers' hands as quickly as possible is the, is the name of the game because they're trying to make decisions and we're empowering them to make those decisions faster. We always had challenges in, in the arena because, well, partners like confluent didn't, didn't exist when we started scorecard, when, when we were a customer. But we, we, we think of it as a partnership when we found confluent technology, and you can hear it from Bharath's description. 
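To ground the streaming-pipeline idea, here is a minimal sketch of producing scan findings to a Confluent Cloud topic with the confluent-kafka Python client. The bootstrap server, credentials, topic name, and record fields are hypothetical and are not taken from SecurityScorecard's actual pipeline.

```python
# Minimal sketch: producing scan findings to a Kafka topic on Confluent Cloud
# with the confluent-kafka client. Broker address, credentials, topic name,
# and record fields are placeholders, not SecurityScorecard's real pipeline.
import json

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})


def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")


finding = {
    "org_domain": "example.com",
    "ip": "203.0.113.10",
    "port": 443,
    "issue": "tls_certificate_expired",
    "observed_at": "2022-09-01T12:00:00Z",
}

# Key by organization so every finding for one company lands in the same
# partition, which keeps downstream, per-organization aggregation simple.
producer.produce(
    "scan-findings",                 # hypothetical topic name
    key=finding["org_domain"],
    value=json.dumps(finding),
    callback=delivery_report,
)
producer.flush()
```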
This is exactly the problem that we're solving. We're, we're here to help. What the technology has done for us since then is it's not only allowed us to process the data faster and get the analytics to the customer, but it's also allowed us to create more value for customers, which, which I'll talk about in a bit, including new products and new modules that we didn't have the capabilities to deliver before. >>And we'll talk about those new products in a second, exciting stuff coming out there from SSC. Bharath, talk about the partnership from, from confluent's perspective, how has it enabled confluent to actually probably enhance its technology as a result of seeing and learning what SSC is able to do with the technology? >>Yeah, first of all, I, I completely agree with Sam, it's, it's more of a partnership because like Sam said, we sort of shared the same vision and that is to really make sure that organizations have access to the data, like I said earlier, no matter where it resides, so that you can scan and identify the, the potential security threats. I think from, from our perspective, what's really helped us from the perspective of partnering with SSC is just looking at the data volumes that they're working with. So I know a stat that we talked about recently was around scanning billions of records, thousands of ports on a daily basis. And so that's where, like I, like I mentioned earlier, our technology really excels because you can really ingest and amplify the volumes of data that you're processing so that you can scan and, and detect those threats in real time. >>Because I mean, especially the amount of volume, the data volume that's increasing on a year by year basis, that aspect, in order to be able to respond quickly, that is paramount. And so what's really helped us is just seeing what SSC is doing in terms of scanning the, the web ports or the data systems that are, are at potential risk. Being able to support their use cases, whether it's data sharing between their different teams internally or being able to empower customers to be able to detect and scan their data systems. And so the learning for us is really seeing how those millions and billions of records get processed. >>Got it, sounds like a really synergistic partnership that you guys have had there for the last year or so. Sam, let's go back over to you. You mentioned some new products. I see SSC just released an attack surface intelligence product that's detecting thousands of vulnerabilities per minute. Talk to us about that, the importance of that, and another release that you're making. >>There are some really exciting products that we have released recently and are releasing at security scorecard. When we think about, when we think about ratings and risk, we think about it not just for our companies or our third parties, but we think about it in a, in a broader sense of an, of an ecosystem, because it's important to have data on third parties, but we also want to have the data on their third parties as well. No, nobody's operating in a vacuum. Everybody's operating in this hyper connected ecosystem and the risk can live not just in the third parties, but they might be storing, processing data in a myriad of other technological solutions, which we want to understand, but it's really hard to get that visibility because today the way it's done is companies ask their third parties: Hey, send me a list of your third parties, where my data is stored. 
>>It's very manual, it's very labor intensive, and it's a trust based exercise that makes it really difficult to validate. What we've done is we've developed a technology called AVD, automatic vendor detection. And what AVD does is it goes out, and for any company, your own company or another business partner that you work with, it will go detect all of the third party connections that we see that have a live network connection or data connection to an organization. So that's like an awareness and discovery tool, because now we can see and pull the veil back and see what the bigger ecosystem and connectivity looks like. Thus allowing the customers to go hold accountable not just the third parties, but their fourth parties, fifth parties, really Nth parties. And they, and they can only do that by using scorecard. The attack surface intelligence tool is really exciting for us because, well, before security scorecard, people thought what we were doing was fairly impossible. >>It was really hard to get instant visibility on any company and any business partner. And at the same time, it was of critical importance to have that instant visibility into the risk because companies are trying to make faster decisions and they need the risk data to steer those decisions. So when I think about, when I think about that problem in, in managing sort of this evolving landscape, what it requires is it requires insightful and actionable, real time security data. And that relies on a couple things, talent and tech. On the talent side, it starts with people. We have an amazing R and D team. We invest heavily. It's the heartbeat of what we do. That team really excels in areas of data collection, analysis and scaling large data sets. And then we know on the tech side, well, we figured out some breakthrough techniques and it also requires partners like confluent to help with the real time streaming. >>What we realized was those capabilities are very desired in the market. And we created a new product from it called attack surface intelligence. Attack surface intelligence focuses less on the rating. There's, there's a persona of users that really value the rating. It's easy to understand. It's a bridge language between technical and non-technical stakeholders. That's on one end of the spectrum. On the other end of the spectrum, there's customers and users, very technical customers and users, that may not have as much interest in a layman's rating, but really want a deep dive into the strong threat intel data and capabilities and insights that we're producing. So we produced ASI, which stands for attack surface intelligence, that allows customers to look at the surface area of attack, all of the digital assets for any organization, and see all of the threats, vulnerabilities, bad actors, including sometimes discoveries of zero day vulnerabilities that are, that are out in the wild and being exploited by bad guys. So we have a really strong pulse on what's happening on the internet, good and bad. And we created that product to help service a market that was interested in, in going deep into the data. >>So it's >>So critical. 
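In the same spirit, a downstream consumer could tally the external services each organization is observed talking to, loosely analogous to the vendor-detection idea Sam describes. This is a simplified sketch with hypothetical topic, group, and field names, not SecurityScorecard code.

```python
# Simplified sketch: consuming connection observations and tallying which
# external services each organization appears to talk to. Topic, group id,
# and record fields are hypothetical; authentication settings are omitted.
import json
from collections import defaultdict

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",  # placeholder
    "group.id": "vendor-detection-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["connection-observations"])   # hypothetical topic

third_parties = defaultdict(set)   # org domain -> set of observed destinations

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # e.g. {"org_domain": "example.com", "destination": "api.hubspot.com"}
        third_parties[event["org_domain"]].add(event["destination"])
        print(event["org_domain"], "->", sorted(third_parties[event["org_domain"]]))
except KeyboardInterrupt:
    pass
finally:
    consumer.close()
```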
Go >>Ahead to jump in there real quick, because I think the points that Sam brought up, we had a great, great discussion recently while we were building on the case study that I think brings this to life, going back to the AVD product that Sam talked about and, and Sam can probably do a better job of walking through the story, but the way I understand it, one of security scorecards customers approached them and told them that they had an issue to resolve and what they ended up. So this customer was using an AVD product at the time. And so they said that, Hey, the car SSE, they said, Hey, your product shows that we used, you were using HubSpot, but we stopped using that age server. And so I think when SSE investigated, they did find a very recent HubSpot ping being used by the marketing team in this instance. And as someone who comes from that marketing background, I can raise my hand and said, I've been there, done that. So, so yeah, I mean, Sam can probably share his thoughts on this, but that's, I think the great story that sort of brings this all to life in terms of how actually customers go about using SSCs products. >>And Sam, go ahead on that. It sounds like, and one of the things I'm hearing that is a benefit is reduction in shadow. It, I'm sure that happens so frequently with your customers about Mar like a great example that you gave of, of the, the it folks saying we don't use HubSpot, have it in years marketing initiates an instance. Talk about that as some of the benefits in it for customers reducing shadow it, there's gotta be many more benefits from a security perspective. >>Yeah, the, there's a, there's a big challenge today because the market moved to the cloud and that makes it really easy for anybody in an organization to go sign, sign up, put in a credit card, or get a free trial to, to any product. And that product can very easily connect into the corporate system and access the data. And because of the nature of how cloud products work and how easy they are to sign up a byproduct of that is they sort of circumvent a traditional risk assessment process that, that organizations go through and organizations invest a, a lot of money, right? So there's a lot of time and money and energy that are invested in having good procurement risk management life cycles, and making sure that contracts are buttoned up. So on one side you have companies investing loads of energy. And then on the other side, any employee can circumvent that process by just going and with a few clicks, signing up and purchasing a product. >>And that's, and, and, and then that causes a, a disparity and Delta between what the technology and security team's understanding is of the landscape and, and what reality is. And we're trying to close that gap, right? We wanna close and reduce any windows of time or opportunity where a hacker can go discover some misconfigured cloud asset that somebody signed up for and maybe forgot to turn off. I mean, it's a lot of it is just human error and it, and it happens the example that Barra gave, and this is why understanding the third parties are so important. A customer contacted us and said, Hey, you're a V D detection product has an error. It's showing we're using a product. I think it was HubSpot, but we stopped using that. Right. And we don't understand why you're still showing it. It has to be a false positive. >>So we investigated and found that there was a very recent live HubSpot connection, ping being made. Sure enough. 
When we went back to the customer said, we're very confident the data's accurate. They looked into it. They found that the marketing team had started experimenting with another instance of HubSpot on the side. They were putting in real customer data in that instance. And it, it, you know, it triggered a security assessment. So we, we see all sorts of permutations of it, large multinational companies spin up a satellite office and a contractor setting up the network equipment. They misconfigure it. And inadvertently leave an administrator portal to the Cisco router exposed on the public internet. And they forget to turn off the administrative default credentials. So if a hacker stumbles on that, they can ha they have direct access to the network. We're trying to catch those things and surface them to the client before the hackers find it. >>So we're giving 'em this, this hacker's eye view. And without the continuous data analysis, without the stream processing, the customer wouldn't have known about those risks. But if you can automatically know about the risks as they happen, what that does is that prevents a million shoulder taps because the customer doesn't have to go tap on the marketing team's shoulder and go tap on employees and manually interview them. They have the data already, and that can be for their company. That can be for any company they're doing business with where they're storing and processing data. That's a huge time savings and a huge risk reduction, >>Huge risk reduction. Like you're taking blinders off that they didn't even know were there. And I can imagine Sam tune in the last couple of years, as SAS skyrocketed the use of collaboration tools, just to keep the lights on for organizations to be able to communicate. There's probably a lot of opportunity in your customer base and perspective customer base to engage with you and get that really full 360 degree view of their entire organization. Third parties, fourth parties, et cetera. >>Absolutely. Absolutely. CU customers are more engaged than they've ever been because that challenge of the market moving to the cloud, it hasn't stopped. We've been talking about it for a long time, but there's still a lot of big organizations that are starting to dip their toe in the pool and starting to cut over from what was traditionally an in-house data center in the basement of the headquarters. They're, they're moving over to the cloud. And then on, on top of that cloud providers like Azure, AWS, especially make it so easy for any company to go sign up, get access, build a product, and launch that product to the market. We see more and more organizations sitting on AWS, launching products and software. The, the barrier to entry is very, very low. And the value in those products is very, very high. So that's drawing the attention of organizations to go sign up and engage. >>The challenge then becomes, we don't know who has control over this data, right? We don't have know who has control and visibility of our data. We're, we're bringing that to surface and for vendors themselves like, especially companies that sit in AWS, what we see them doing. And I think Lisa, this is what you're alluding to. When companies engage in their own scorecard, there's a bit of a social aspect to it. When they look good in our platform, other companies are following them, right? So now all of the sudden they can make one motion to go look good, make their scorecard buttoned up. 
And everybody who's looking at them now sees that they're doing the right things. We actually have a lot of vendors who are customers, they're winning more competitive bakeoffs and deals because they're proving to their clients faster that they can trust them to store the data. >>So it's a bit of, you know, we're in a, two-sided kind of market. You have folks that are assessing other folks. That's fun to look at others and see how they're doing and hold them accountable. But if you're on the receiving end, that can be stressful. So what we've done is we've taken the, that situation and we've turned it into a really positive and productive environment where companies, whether they're looking at someone else or they're looking at themselves to prove to their clients, to prove to the board, it turns into a very productive experience for them. >>One. Oh >>Yeah. That validation. Go ahead, Bharath. >>Really, I was gonna ask Sam his thoughts on one particular aspect. So in terms of the industry, Sam, that you're seeing sort of really moving to the cloud and like this need for secure data, making sure that the data can be trusted, are there specific like verticals that are doing that better than the others? Or do you see that across the board? >>I think some industries have it easier and some industries have it harder, definitely in industries that are, I think, health, healthcare, financial services, absolutely. We see heavier activity there on, on both sides, right? They, they're, they're certainly becoming more and more proactive in their investments, but the attacks are not stopping against those, especially healthcare, because the data is so valuable and historically healthcare was under, was an underinvested space, right? Hospitals were always strapped for IT folks. Now, now they're starting to wake up and pay very close attention and make heavier investments. >>That's pretty interesting. >>Tremendous opportunity there guys. I'm sorry, we are out of time, but this is such an interesting conversation. We could keep going. Wanna ask you both, where can prospective interested customers go to learn more on the SSC side, on the confluent side, through the AWS marketplace? >>I'll let Sam go first. >>Sure. Oh, thank, thank, thank you. Thank you. On the security scorecard side, well look, security scorecard, with the help of confluent, has made it possible to instantly rate the security posture of any company in the world. We have 12 million organizations rated today and, and that, and that's going up every day. We invite any company in the world to try security scorecard for free and experience how, how easy it is to get your rating and see the security rating of, of any company, and any, any company can claim their score. There's no, there's no charge. They can go to securityscorecard.com and we have a special, actually a special URL, securityscorecard.com/free-account/aws marketplace. And even better, if someone's already on AWS, you know, you can view our security posture with the AWS marketplace vendor insights plugin to quickly and securely procure your products. >>Awesome. Guys, this has been fantastic information. I'm sorry, Bharath, did you wanna add one more thing? Yeah. >>I just wanted to give a quick call out, Lisa. So anyone who wants to learn more about data streaming can go to www.confluent.io. There's also an upcoming event, which has a separate URL, that's coming up in October where you can learn all about data streaming, and that URL is current event.io. 
So those are the two URLs I just wanted to quickly call out. >>Awesome guys. Thanks again so much for partnering with the cube on season two, episode four of our AWS startup showcase. We appreciate your insights and your time. And for those of you watching, thank you so much. Keep it right here for more action on the, for my guests. I am Lisa Martin. We'll see you next time.

Published Date : Sep 7 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Sam | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Sam Kam | PERSON | 0.99+
Lisa | PERSON | 0.99+
Sam Kassoumeh | PERSON | 0.99+
October | DATE | 0.99+
20% | QUANTITY | 0.99+
2019 | DATE | 0.99+
SSE | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
millions | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
SSC | ORGANIZATION | 0.99+
360 degree | QUANTITY | 0.99+
Rob | PERSON | 0.99+
HubSpot | ORGANIZATION | 0.99+
Excel | TITLE | 0.99+
Cisco | ORGANIZATION | 0.99+
Delta | ORGANIZATION | 0.99+
2022 | DATE | 0.99+
last year | DATE | 0.99+
fifth parties | QUANTITY | 0.99+
Bharath Chari | PERSON | 0.99+
both sides | QUANTITY | 0.99+
SAS | ORGANIZATION | 0.99+
thousands | QUANTITY | 0.98+
over a million and a half organizations | QUANTITY | 0.98+
three year | QUANTITY | 0.98+
APA | TITLE | 0.98+
today | DATE | 0.98+
billions of records | QUANTITY | 0.98+
thousands of ports | QUANTITY | 0.97+
second | QUANTITY | 0.97+
one | QUANTITY | 0.97+
both | QUANTITY | 0.97+
Colu | ORGANIZATION | 0.97+
fourth parties | QUANTITY | 0.96+
two URLs | QUANTITY | 0.96+
over a thousand vulnerabilities | QUANTITY | 0.96+
www confluent IO | OTHER | 0.95+
zero day | QUANTITY | 0.95+
Barth | PERSON | 0.95+
Intel | ORGANIZATION | 0.93+
scorecard.com | OTHER | 0.93+
one more thing | QUANTITY | 0.91+
SSE | TITLE | 0.89+
first | QUANTITY | 0.89+
Barra | ORGANIZATION | 0.88+
24 7 | QUANTITY | 0.87+
12 million organizations | QUANTITY | 0.85+

Keynote Enabling Business and Developer Success | Open Cloud Innovations


 

(upbeat music) >> Hello, and welcome to this startup showcase. It's great to be here and talk about some of the innovations we are doing at AWS, how we work with our partner community, especially our open source partners. My name is Deepak Singh. I run our compute services organization, which is a very vague way of saying that I run a number of things that are connected together through compute. Very specifically, I run a container services organization. So for those of you who are into containers, ECS, EKS, fargate, ECR, App Runner Those are all teams that are within my org. I also run the Amazon Linux and BottleRocketing. So anything AWS does with Linux, both externally and internally, as well as our high-performance computing team. And perhaps very relevant to this discussion, I run the Amazon open source program office. Serving at AWS for over 13 years, almost 14, involved with compute in various ways, including EC2. What that has done has given me a vantage point of seeing how our customers use the services that we build for them, how they leverage various partner solutions, and along the way, how AWS itself has gotten involved with opensource. And I'll try and talk to you about some of those factors and how they impact, how you consume our services. So why don't we get started? So for many of you, you know, one of the things, there's two ways to look at AWS and open-source and Amazon in general. One is the number of contributors you may have. And the number of repositories that contribute to. Those are just a couple of measures. There are people that I work with on a regular basis, who will remind you that, those are not perfect measures. Sometimes you could just contribute to one thing and have outsized impact because of the nature of that thing. But it address being what it is, increasingly we'll look at different ways in which we can help contribute and enhance open source 'cause we consume a lot of it as well. I'll talk about it very specifically from the space that I work in the container space in particular, where we've worked a lot with people in the Kubernetes community. We've worked a lot with people in the broader CNCF community, as well as, you know, small projects that our customers might have got started off with. For example, I want to like talking about is Argo CD from Intuit. We were very actively involved with helping them figure out what to do with it. And it was great to see how into it. And we worked, etc, came together to think about get-ups at the Kubernetes level. And while those are their projects, we've always been involved with them. So we try and figure out what's important to our customers, how we can help and then take because of that. Well, let's talk about a little bit more, here's some examples of the kinds of open source projects that Amazon and AWS contribute to. They arranged from the open JDK. I think we even now have our own implementation of Java, the Corretto open source project. We contribute to projects like rust, where we are very active in the rest foundation from a leadership role as well, the robot operating system, just to pick some, we collaborate with Facebook and actively involved with the pirates project. And there's many others. You can see all the logos in here where we participate either because they're important to us as AWS in the services that we run or they're important to our customers and the services that they consume or the open source projects they care about and how we get to those. 
How we get and make those decisions is often depends on the importance of that particular project. At that point in time, how much impact they're having to AWS customers, or sometimes very feel that us contributing to that project is super critical because it helps us build more robust services. I'll talk about it in a completely, you know, somewhat different basis. You may have heard of us talk about our new next generation of Amazon Linux 2022, which is based on fedora as its sub stream. One of the reasons we made this decision was it allows us to go and participate in the preneurial project and make sure that the upstream project is robust, stays robust. And that, that what that ends up being is that Amazon Linux 2022 will be a robust operating system with the kinds of capabilities that our customers are asking for. That's just one example of how we think about it. So for example, you know, the Python software foundation is something that we work with very closely because so many of our customers use Python. So we help run something like PyPy which is many, you know, if you're a Python developer, I happened to be a Ruby one, but lots of our customers use Python and helping the Python project be robust by making sure PyPy is available to everybody is something that we help provide credits for help support in other ways. So it's not just code. It can mean many different ways of contributing as well, but in the end code and operations is where we hang our happens. Good examples of this is projects that we will create an open source because it makes sense to make sure that we open source some of the core primitives or foundations that are part of our own services. A great example of that, whether this be things that we open source or things that we contribute to. And I'll talk about both and I'll talk about things near and dear to my heart. There's many examples I've picked the two that I like talking about. The first of these is firecracker. Many of you have heard about it, a firecracker for those of you who don't know is a very lightweight virtual machine manager, which allows you to run these micro VMs. And why was this important many years ago when we started Lambda and quite honestly, Fugate and foggy, it still runs quite a bit in that mode, we used to have to run on VMs like everything else and finding the right VM for the size of tasks that somebody asks for the size of function that somebody asks for is requires us to provision capacity ahead of time. And it also wastes a lot of capacity because Lambda function is small. You won't even if you find the smallest VM possible, those can be a little that can be challenging. And you know, there's a lot of resources that are being wasted. VM start at a particular speed because they have to do a whole bunch of things before the operating system spins up and the virtual machine spins up and we asked ourselves, can we do better? come up with something that allows us to create right size, very lightweight, very fast booting. What's your machines, micro virtual machine that we ended up calling them. That's what led to firecracker. And we open source the project. And today firecrackers use, not just by AWS Lambda or foggy, but by a number of other folks, there's companies like fly IO that are using it. We know people using firecracker to run Kubernetes on prem on bare metal as an example. So we've seen a lot of other folks embrace it and use it as the foundation for building their own serverless services, their own container services. 
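For readers who want to see what the Firecracker microVM flow described above looks like in practice, here is a minimal sketch of driving Firecracker's API socket from Python. It is an illustration assembled for this write-up, not something from the talk: the socket path, image paths, and sizes are placeholders, and the exact request fields should be checked against the Firecracker API documentation.

```python
# Minimal sketch: configure and boot a Firecracker microVM over its API socket.
# Assumes the firecracker binary is already running, e.g.:
#   firecracker --api-sock /tmp/firecracker.socket
# Kernel and rootfs paths below are placeholders.
import requests_unixsocket  # pip install requests-unixsocket

BASE = "http+unix://%2Ftmp%2Ffirecracker.socket"
api = requests_unixsocket.Session()

# Right-size the VM: the "micro" in microVM.
api.put(f"{BASE}/machine-config",
        json={"vcpu_count": 1, "mem_size_mib": 128}).raise_for_status()

# Point it at an uncompressed kernel and a root filesystem image.
api.put(f"{BASE}/boot-source",
        json={"kernel_image_path": "/images/vmlinux.bin",
              "boot_args": "console=ttyS0 reboot=k panic=1"}).raise_for_status()
api.put(f"{BASE}/drives/rootfs",
        json={"drive_id": "rootfs",
              "path_on_host": "/images/rootfs.ext4",
              "is_root_device": True,
              "is_read_only": False}).raise_for_status()

# Start the guest.
api.put(f"{BASE}/actions",
        json={"action_type": "InstanceStart"}).raise_for_status()
```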
And we think there's a lot of value and learnings that we can bring to the table because we get the experience of operating at scale, but other people can bring to the table cause they may have specific requirements that we may not find it as important from an AWS perspective. So that's firecracker an example of a project where we contribute because we feel it's fundamentally important to us as continually. We were found, you know, we've been involved with continuity from the beginning. Today, we are a whole team that does nothing else, but contribute to container D because container D underlies foggy. It underlies our Kubernetes offerings. And it's increasingly being used by customers directly by their placement. You know, where they're running container D instead of running a full on Docker or similar container engine, what it has allowed us to do is focus on what's important so that we can operate continuously at scale, keep it robust and secure, add capabilities to it that AWS customers need manifested often through foggy Kubernetes, but in the end, it's a win-win for everybody. It makes continuously better. If you want to use containers for yourself on AWS, that's a great way to you. You know, you still, you still benefit from all the work that we're doing. The decision we took was since it's so important to us and our customers, we wanted a team that lived in breathed container D and made sure a super robust and there's many, many examples like that. No, that we ended up participating in, either by taking a project that exists or open sourcing our own. Here's an example of some of the open source projects that we have done from an AWS on Amazon perspective. And there's quite a few when I was looking at this list, I was quite surprised, not quite surprised I've seen the reports before, but every time I do, I have to recount and say, that's a lot more than one would have thought, even though I'd been looking at it for such a long time, examples of this in my world alone are things like, you know, what work had to do with Amazon Linux BottleRocket, which is a container host operating system. That's been open-sourced from day one. Firecracker is something we talked about. We have a project called AWS peril cluster, which allows you to spin up high performance computing clusters on AWS using the kind of schedulers you may use to use like slum. And that's an open source project. We have plenty of source projects in the web development space, in the security space. And more recently things like the open 3d engine, which is something that we are very excited about and that'd be open sourced a few months ago. And so there's a number of these projects that cover everything from tooling to developer, application frameworks, all the way to database and analytics and machine learning. And you'll notice that in a few areas, containers, as an example, machine learning as an example, our default is to go with open source option is where we can open source. And it makes sense for us to do so where we feel the product community might benefit from it. That's our default stance. The CNCF, the cloud native computing foundation is something that we've been involved with quite a bit. You know, we contribute to Kubernetes, be contribute to Envoy. I talked about continuity a bit. We've also contributed projects like CDK 8, which marries the AWS cloud development kit with Kubernetes. It's now a sandbox project in Kubernetes, and those are some of the areas. CNCF is such a wide surface area. 
We don't contribute to everything, but we definitely participate actively in CNCF with projects like HCB that are critical to EKS for us. We are very, very active in just how the project evolves, but we also try and see which of the projects are important to our customers who are running Kubernetes, maybe by themselves or with some other project, on AWS. Envoy is a good example. Kubernetes itself is a good example, because in the end we want to make sure that people running Kubernetes on AWS, even if they are not using our services, are successful, and we can help them, or we can work on the projects that are important to them. That's kind of how we think about the world. And it's worked pretty well for us. We've done a bunch of work on the Kubernetes side to make sure that we can integrate and solve a customer problem: everything from the work that we have done with Graviton, our Arm processor, to a virtual GPU plugin that allows you to share NVIDIA GPU resources, to the Elastic Fabric Adapter, which is the network device for high performance computing that you can use with Kubernetes on AWS, along with things that directly impact Kubernetes customers like the cdk8s project. I talked about work that we do with the container networking interface, and AWS Controllers for Kubernetes, which is an open source project that allows you to use other AWS services directly from Kubernetes clusters. Again, you'll notice I say Kubernetes, not EKS, which is our managed Kubernetes service, because we want you to be successful with Kubernetes on AWS whether you're using our managed service, running your own, or using some third party service. Similarly, we worked with Prometheus, and we now have a managed Prometheus service. And at re:Invent last year, we announced the general availability of this thing called Karpenter, which is a provisioning and auto-scaling engine for Kubernetes and is also an open source project. But here's the beauty of Karpenter: you don't have to be using EKS to use it. Anyone running Kubernetes on AWS can leverage it. We focus on the AWS provider, but we've built it in such a way that if you wanted to take Karpenter and implement it on prem or on another cloud provider, that'd be completely okay. That's how it's designed and what we anticipated people may want to do. I talked a little bit about BottleRocket, our Linux-based open-source operating system. And the thing that we have done with BottleRocket is make sure that we focus on security and the needs of customers who want to run orchestrated containers; it's very focused on that problem. So for example, BottleRocket only has the essential software needed to run containers. SELinux is enabled by default, and I'm sure that, you know, Linus Torvalds will be pretty happy to see that. We use things like dm-verity, and it has a read-only root file system and no shell; you can't access it or install one if you wanted to. We allow you to create different build types, variants as we call them, and you can create a variant for a non-AWS environment as well. If you have your own homegrown container orchestrator, you can create a variant for that. It's designed to be used in many different contexts, and all of that is open sourced. And then we use TUF, the update framework, to publish to a secure repository and get this kind of transactional way of updating the software. And it's something that we didn't invent, but we have embraced wholeheartedly.
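As a quick aside before the partner discussion continues below, here is roughly what the Karpenter Provisioner mentioned above looks like, sketched as Python data and rendered to YAML. The field names follow the v1alpha5 API as best recalled here, and the values are illustrative; check the Karpenter documentation before relying on any of this.

```python
# Sketch: a Karpenter Provisioner that lets a cluster scale onto both arm64
# (Graviton) and x86 capacity, written as a Python dict and emitted as YAML so
# it can be committed to Git and applied with kubectl.
import yaml

provisioner = {
    "apiVersion": "karpenter.sh/v1alpha5",
    "kind": "Provisioner",
    "metadata": {"name": "default"},
    "spec": {
        "requirements": [
            {"key": "kubernetes.io/arch",
             "operator": "In", "values": ["arm64", "amd64"]},
            {"key": "karpenter.sh/capacity-type",
             "operator": "In", "values": ["spot", "on-demand"]},
        ],
        # Cap how much capacity this provisioner may create.
        "limits": {"resources": {"cpu": "1000"}},
        # Scale empty capacity back down after 30 seconds.
        "ttlSecondsAfterEmpty": 30,
    },
}

print(yaml.safe_dump(provisioner, sort_keys=False))
```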
It's a bottle rockets, completely open source, you know, have partners like Aqua, where who develop security tools for containers. And for them, you know, something I bought in rocket is a natural partnership because people are running a container host operating system. You can use Aqua tooling to make sure that they have a secure Indiana environment. And we see many more examples like that. You may think so over us, it's all about AWS proprietary technology because Lambda is a proprietary service. But you know, if you look peek under the covers, that's not necessarily true. Lambda runs on top of firecracker, as we've talked about fact crackers and open-source projects. So the foundation of Lambda in many ways is open source. What it also allows people to do is because Lambda runs at such extreme scale. One of the things that firecracker is really good for is running at scale. So if you want to build your own firecracker base at scale service, you can have most of the confidence that as long as your workload fits the design parameters, a firecracker, the battle hardening the robustness is being proved out day-to-day by services at scale like Lambda and foggy. For those of you who don't know service support services, you know, in the end, our goal with serverless is to make sure that you don't think about all the infrastructure that your applications run on. We focus on business logic as much as you can. That's how we think about it. And serverless has become its own quote-unquote "Sort of environment." The number of partners and open-source frameworks and tools that are spun up around serverless. In which case mostly, I mean, Lambda, API gateway. So it says like that is pretty high. So, you know, number of open source projects like Zappa server serverless framework, there's so many that have come up that make it easier for our customers to consume AWS services like Lambda and API gateway. We've also done some of our own tooling and frameworks, a serverless application model, AWS jealous. If you're a Python developer, we have these open service runtimes for Lambda, rust dot other options. We have amount of number of tools that we opened source. So in general, you'll find that tooling that we do runtime will tend to be always be open-sourced. We will often take some of the guts of the things that we use to build our systems like firecracker and open-source them while the control plane, etc, AWS services may end up staying proprietary, which is the case in Lambda. Increasingly our customers build their applications and leverage the broader AWS partner network. The AWS partner network is a network of partnerships that we've built of trusted partners. when you go to the APN website and find a partner, they know that that partner meets a certain set of criteria that AWS has developed, and you can rely on those partners for your own business. So whether you're a little tiny business that wants some function fulfill that you don't have the resources for or large enterprise that wants all these applications that you've been using on prem for a long time, and want to keep leveraging them in the cloud, you can go to APN and find that partner and then bring their solution on as part of your cloud infrastructure and could even be a systems integrator, for example, to help you solve this specific development problem that you may have a need for. Increasingly, you know, one of the things we like to do is work with an apartment community that is full of open-source providers. 
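To ground the serverless model described above, here is a minimal sketch of a Lambda-style handler plus a client-side invocation with boto3. The function name is a placeholder, and packaging and deployment (SAM, Serverless Framework, Chalice, and so on) are left out.

```python
# Sketch: the smallest useful Lambda-style handler and an invocation of the
# deployed function with boto3. Assumes AWS credentials and region are set.
import json
import boto3

def handler(event, context):
    # Business logic only: the whole premise of serverless.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

if __name__ == "__main__":
    client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName="hello-world",  # placeholder name of the deployed function
        Payload=json.dumps({"name": "showcase"}).encode(),
    )
    print(resp["Payload"].read().decode())
```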
So a great one, there's so many, and you have, we have a panel discussion with many other partners as well, who make it easier for you to build applications on AWS, all open source and built on open source. But I like to call it a couple of them. The first one of them is TIDELIFT. TIDELIFT, For those of you who don't know is a company that provides SAS based tools to curate track, manage open source catalogs. You know, they have a whole network of maintainers and providers. They help, if you're an independent open developer, or a smart team should probably get to know TIDELIFT. They provide you benefits and, you know, capabilities as a developer and maintainer that are pretty unique and really help. And I've seen a number of our open source community embraced TIDELIFT quite honestly, even before they were part of the APN. But as part of the partner network, they get to participate in things like ISP accelerate and they get to they're officially an advanced tier partner because they are, they migrated the SAS offering onto AWS. But in the end, if you're part of the open source supply chain, you're a maintainer, you are a developer. I would recommend working with TIDELIFT because their goal is making all of you who are developing open source solutions, especially on AWS, more successful. And that's why I enjoy this partnership with them. And I'm looking to do a lot more because I think as a company, we want to make sure that open source developers don't feel like they are not supported because all you have to do is read various forums. It's challenging often to be a maintainer, especially of a small project. So I think with helping with licensing license management, security identification remediation, helping these maintainers is a big part of what TIDELIFT to us and it was great to see them as part of a partner network. Another partner that I like to call sysdig. I actually got introduced to them many years ago when they first launched. And one of the things that happened where they were super interested in some of our serverless stuff. And we've been trying to figure out how we can work together because all of our customers are interested in the capabilities that cystic provides. And over the last few years, he found a number of areas where we can collaborate. So sysdig, I know them primarily in a security company. So people use cystic to secure the bills, detect, you know, do threat response, threat detection, completely continuously validate their posture, get this continuous analytics signal on how they're doing and monitor performance. At the end of it, it's a SAS platform. They have a very nice open source security stack. The one I'm most familiar with. And I think most of you are probably familiar with is Falco. You know, sysdig, a CNCF project has been super popular. It's just to go SSS what 3, 37, 40 million downloads by now. So that's pretty, pretty cool. And they have been a great partner because we've had to do make sure that their solution works at target, which is not a natural place for their software to run, but there was enough demand and interest from our customers that, you know, or both companies leaned in to make sure they can be successful. So last year sister got a security competency. We have a number of specific competencies that we for our partners, they have integration and security hub is great. partners are lean in the way cystic has onto making our customer successful. And working with us are the best partners that we have. 
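For readers unfamiliar with Falco, mentioned above, this is a sketch of what a detection rule looks like, built as Python data and dumped to YAML. The condition and output fields are simplified from memory; the upstream Falco rules files are the authoritative reference.

```python
# Sketch: a Falco-style rule that flags an interactive shell spawned inside a
# container, a common indicator of compromise.
import yaml

rules = [{
    "rule": "Terminal shell in container",
    "desc": "A shell was spawned inside a container.",
    "condition": "spawned_process and container and proc.name in (bash, sh)",
    "output": ("Shell spawned in container "
               "(user=%user.name container=%container.id command=%proc.cmdline)"),
    "priority": "WARNING",
}]

print(yaml.safe_dump(rules, sort_keys=False))
```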
And there's a number of open source companies out there built on open source where their entire portfolio is built on open source software or the active participants like we are that we love working with on a day to day basis. So, you know, I think the thing I would like to, as we wind this out in this presentation is, you know, AWS is constantly looking for partnerships because our partners enable our customers. They could be with companies like Redis with Mongo, confluent with Databricks customers. Your default reaction might be, "Hey, these are companies that maybe compete with AWS." but no, I mean, I think we are partners as well, like from somebody at the lower end of the spectrum where people run on top of the services that I own on Linux and containers are SE 2, For us, these partners are just as important customers as any AWS service or any third party, 20 external customer. And so it's not a zero sum game. We look forward to working with all these companies and open source projects from an AWS perspective, a big part of how, where my open source program spends its time is making it easy for our developers to contribute, to open source, making it easy for AWS teams to decide when to open source software or participate in open source projects. Over the last few years, we've made significant changes in how we reduce the friction. And I think you can see it in the results that I showed you earlier in this stock. And the last one is one of the most important things that I say and I'll keep saying that, that we do as AWS is carry the pager. There's a lot of open source projects out there, operationalizing them, running them at scale is not easy. It's not all for whatever reason. It may not have anything to do with the software itself. But our core competency is taking that and being really good at operating it and becoming experts at operating it. And then ideally taking that expertise and experience and operating that project, that software and contributing back upstream. Cause that makes it better for everybody. And I think you'll see us do a lot more of that going forward. We've been doing that for the last few years, you know, in the container space, we do it every day. And I'm excited about the possibilities. With that. Thank you very much. And I hope you enjoy the rest of the showcase. >> Okay. Welcome back. We have Deepak sing here. We just had the keynote closing keynote vice-president of compute services. Deepak. Great to a great keynote, great wisdom and insight from that session. A very notable highlights and cutting edge trends and product information. Thanks for sharing. >> No, anytime it's always good to be here. It's too bad that we still doing this virtually, but always good to talk to you, John. >> We'll get hopefully through this way pretty quickly, I want to jump right in. Cause we don't have a lot of time. I want to get some quick question. You've brought up a good things. Open source innovation. Okay. Going next level. You've seen the rise of super clouds and super apps developing at open source. You're seeing big companies contributing, you know, you mentioned Argo into it. You're seeing that dynamic where companies are forming around this. This is a rising tide. This is, this is actually real. It's not the old school of, okay, here's a project. And then someone manages support and commercialization of it. It's actually platform in cloud scale. This is next gen. >> Yeah. And actually I think it started a few years ago. 
We can talk about a company that, you know, you're very familiar with as part of this event, which is Armory. Many years ago, Netflix spun off this project called Spinnaker. Spinnaker is a CI/CD system, a continuous delivery system, that was developed at Netflix for their own purposes, but they chose to open source it. And since then, it's become very popular with customers who want to use it, even on prem, and you have a company that spun up on it. I think what's making this world very unique is you have very large companies like Facebook that will build things for themselves, like Vitess, or Netflix with Spinnaker, and open source them. And you can have a lot of discussion about why they chose to do so, etc. But increasingly that's becoming the default: when Amazon or Netflix or Facebook, or Meta I guess you call them these days, build something for themselves, for their own needs, the first question we ask ourselves is, should it be open source? And increasingly we are all saying yes. And here's what happens because of that. It gives an opportunity, depending on how you open source it, for innovation through commercial deployments, so that you get SaaS companies, you know, that are going to take that product and make it relevant and useful to a very broad number of customers. You build partnerships with cloud providers like AWS, because our customers love this open source project and they need help. And they may choose an AWS managed service, or they may end up working with this partner on a day-to-day basis. And we want to work with that partner because they're making our customers successful, which is one reason all of us are here. So you're having this set of innovation from large companies, whether they are consumer companies like Meta or infrastructure companies like us, or just random innovation that's happening in an open source project, which ends up in companies being spun up, and that fosters that innovation and that flywheel that's happening right now. And I think you said it: this is unique. I mean, you never saw this happen before from so many different directions. >> It really is a nice progression on the business model side as well. You mentioned Argo, which is a great organic thing that Intuit developed. We just interviewed Codefresh. They just presented here in the showcase as well. You're seeing the formation around these projects develop now in the community at a different scale. I mean, look at Codefresh. I mean, Intuit did Argo, and they're not just supporting it, they're building a platform. So you're seeing the dynamics of tools and now emerging platforms. You mentioned Lambda, okay, which is proprietary for AWS, and your talk, powered by open source. So again, open source combined with cloud scale allows for new potential super applications or super clouds that are developing. This is a new phenomenon. This isn't just lift and shift and host on the cloud. This is actually a construction, production, developer workflow. >> Yeah. And you are seeing consumers, large companies, enterprises, startups. You know, it used to be that startups would be comfortable adopting some of these solutions, but now you see companies of all sizes doing so. And as I said, it's not just software; it's software as services, increasingly becoming the way these are delivered to customers. I actually think the innovation is just getting going, which is why we have this.
We have so many partners here who are all in inventing and innovating on top of open source, whether it's developed by them or a broader community. >> Yeah. I liked, I liked the represent container. Do you guys have, did that drove that you've seen a lot of changes and again, with cloud scale and open source, you seeing the dynamics change, whether you're enabling that, and then you see kind of like real big change. So let's take snowflake, a big customer of AWS. They started out as a startup too, but they weren't a data warehouse. They were bringing data warehouse like functionality and then changing everything differently and making it consumable for the cloud. And hence they're huge. So that's a disruption into an incumbent leader or sector. Then you've got new capabilities emerging. What's your thoughts, Deepak? Can you share your vision on how you have the disruption to existing leaders, old guard, if you will, as you guys call them and then new capabilities as these new platforms emerge at a net new functionality, how do you see that emerging? >> Yeah. So I speak from my side of the world. I've lived in over the last few years, which has containers and serverless, right? There's a lot of, if you go to any enterprise and ask them, do you want to modernize the infrastructure? Do you want to take advantage of automated software delivery, continuous delivery infrastructure as code modern observability, all of them will say yes, but they also are still a large enterprise, which has these enterprise level requirements. I'm using the word enterprise a lot. And I usually it's a trigger word for me because so many customers have similar requirements, but I'm using it here as large company with a lot of existing software and existing practices. I think the innovation that's coming and I see a lot of companies doing that is saying, "Hey, we understand the problems you want to solve. We understand the world where you live in, which could be regulated." You want to use all these new modalities. How do we allow you to use all of them? Keep the advantages of switching to a Lambda or switching to, and a service running on far gate, but give you the same capabilities. And I think I'll bring up cystic here because we work so closely with them on Falco. As an example, I just talked about them in my keynote. They could have just said, "Oh no, we'll just support the SE2 and be done with it." They said, "No, we're going to make sure that serverless containers in particular are something that you're going to be really good at because our customers want to use them, but requires us to think differently. And then they ended up developing new things like Falco that are born in this new world, but understand the requirements of the old world. If you get what I'm saying. And I think that a real example. >> Yeah. Oh, well, I mean, first of all, they're smart. So that was pretty obvious for most people that know, sees that you can connect the dots on serverless, which is a great point, but not everyone can see that again, this is what's new and and systig was just found in his backyard. As I found out on my interview, a great, great founder, they would do a new thing. So it was a very easy to connect the dots there again, that's the trend. 
Well, I got to ask, if they're doing that for serverless: you mentioned Graviton in your speech, and what came out of re:Invent this past year was all the innovation going on at the compute level with Graviton, at many levels in the silicon. How should companies and open source developers think about how to innovate with Graviton? >> Yeah, I mean, you've seen examples from people blogging and tweeting about how fast their applications run on Graviton and the price performance benefits that they get, whether it's in observability or other places. It's something that AWS is going to embrace across the compute portfolio. Obviously you can go find EC2 instances, the Graviton 2 instances, and run on them, and that'll be great. But we know that many of our customers are building new applications on serverless and containers, increasingly with things like Fargate, where they don't want to operate the underlying infrastructure. A big part of what we're doing is to make sure that Graviton is available to you on every compute modality. You've been able to run it on EC2 for a while. You've been able to use ECS and EKS and run Graviton almost since launch. But we want to take it a step further. Elastic Beanstalk customers, and Elastic Beanstalk has been around for a decade, can now use it with Graviton. People running ECS on Fargate can now use Graviton. Lambda customers can pick Graviton as well. So we're taking the price performance benefits that you get from Graviton and basically putting it across the entire compute portfolio. What it means is that every high level service that gets built on that compute infrastructure gets the price performance benefits of the lower power consumption of Arm processors. So I'm personally excited like crazy. And you know, this is Graviton 2; Graviton 3 is coming. >> That's incredible. It's an opportunity like serverless was; it's pretty obvious, and I think hopefully everyone will jump on that. Final question, as the time's ticking here, I want to get your thoughts quickly. If you look at what's happened with containers over the past, say, eight years since the original founding of the first Docker instance, if you will, to how that's evolved, and then the introduction of Kubernetes and the cloud native wave we're seeing now, how would you describe the relationship between the success of Docker and what we're seeing now with Kubernetes in the cloud native construct? What's different, and why is this combination so successful? >> Yeah. I often say that containers would have, let me rephrase that. What I say is that people would have adopted sort of the modern way of running applications whether containers came around or not. But the fact that containers came around made that migration and that journey so much more efficient for people. So right from, I still remember the first talk that Solomon gave when he announced Docker, and customers starting to use it, starting to get interested, all the way to the more sort of advanced orchestration that we have now for containers across the board. And there's so many examples of the way you can do that, Kubernetes being the most well-known one.
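To make the Graviton discussion above concrete, here is a small sketch of opting into arm64 from code with boto3. The function name, IAM role ARN, AMI ID, and zip file are placeholders; the point is simply that Graviton is a parameter on the same APIs.

```python
# Sketch: choosing Graviton (arm64) for Lambda and EC2 with boto3.
import boto3

# Lambda: pick the arm64 architecture at creation time.
lambda_client = boto3.client("lambda")
lambda_client.create_function(
    FunctionName="hello-arm64",                          # placeholder name
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec",   # placeholder role
    Handler="app.handler",
    Code={"ZipFile": open("function.zip", "rb").read()},  # placeholder package
    Architectures=["arm64"],                              # Graviton-backed Lambda
)

# EC2: a Graviton instance family (e.g. m6g) instead of an x86 one.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
)
```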
Here's the thing that I think has changed. I think what Kubernetes and Docker, or the whole sort of modern way of building applications, have done is take people who would have taken years to adopt these practices and bring those practices right to their fingertips, building them into the APIs and, in the case of Kubernetes, building an entire sort of software world around them. The number of decisions people have to take has gotten smaller in many ways. There are so many options that the number of choices is higher, but the speed at which they can get to a result, a production version of an application that works for them, is way lower. I have not seen anything like what I've seen in the last six, seven, eight years of how quickly a company that you would think would never adopt modern technology has been able to go from "this is interesting" to getting it into production really quickly. And I think it's because the tooling makes it so, and because of the adoption you see, right from the fact that you could do docker run and docker build, you know, so easily back in the day, all the way to all the advanced orchestration you can do with container orchestrators today, which takes a lot of that work away as well. There's never been a better time to be a developer, independent of whatever you're trying to build. And I think containers are a big central part of why that's happened. >> Like the recipe: the combination of cloud scale, the timing of Kubernetes, and the containerization concepts just exploded as a beautiful thing. And it creates more opportunities, and of course challenges, which are opportunities that are net new, but it solves the automation piece that we're seeing; again, it only makes things go faster. >> Yes. >> And that's the key trend. Deepak, thank you so much for coming on. We're seeing tons of open cloud innovations, thanks to the success of your team at AWS and being great participants in the community. We're seeing innovations from startups. You guys are helping enable that. Of course, they want to live on their own and be successful and build their super clouds and super apps. So thank you for spending the time with us. Appreciate it. >> Yeah, anytime. And thank you. And you know, this is a great event. So I look forward to people running software and building applications using AWS services and all these wonderful partners that we have. >> Awesome, great stuff. Great startups, great next generation leaders emerging. When you see startups get successful, they become the modern software application platforms out there, powering business and changing the world. This is theCUBE. You're watching the AWS Startup Showcase, season two, episode one, Open Cloud Innovations. I'm John Furrier, your host. See you next time.

Published Date : Jan 26 2022

ENTITIES

Entity | Category | Confidence
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Deepak | PERSON | 0.99+
Lena Torvalds | PERSON | 0.99+
Falco | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Deepak Singh | PERSON | 0.99+
Mehta | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
Lambda | TITLE | 0.99+
first | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
Java | TITLE | 0.99+
Python | TITLE | 0.99+
Solomon | PERSON | 0.99+
two ways | QUANTITY | 0.99+
One | QUANTITY | 0.99+
PyPy | TITLE | 0.99+
last year | DATE | 0.99+
over 13 years | QUANTITY | 0.99+
Linux | TITLE | 0.99+
Today | DATE | 0.99+
Indiana | LOCATION | 0.99+
Databricks | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+

Raziel Tabib & Dan Garfield, Codefresh | AWS Startup Showcase S2 E1 | Open Cloud Innovations


 

(bright music) >> Hi, everyone. Welcome to the CUBE's presentation of the AWS Startup Showcase around open cloud innovations. It's the season two episode one of the ongoing series covering exciting startups from the AWS ecosystem and talking about open source and innovation. I'm John Furrier, your host. Today, we're joined by two great guests. Dan Garfield, chief open source officer and co-founder of Codefresh IO, and Raziel Tabib, CEO and co-founder. Two co-founders in the middle of all the innovation. Gentlemen thanks for coming on. >> Thank you. >> So you guys have a great platform and as cloud native goes mainstream in the enterprise and for developers, the big topic is unification, end-to-end, horizontally scalable, leveraging data. All these things around agile that I call agile cloud next level. This is kind of what we're seeing. The CNCF is growing. You've seen KubeCon every year is more about these kinds of things. Words like orchestration, Kubernetes, container, security. All of those complexities are now at the center of making things easier for developers. This is a key value proposition and you guys at Codefresh are offering really the first enterprise delivery solution powered by Argo, which is an open source project. Again, open source driving really big changes. So let's get into it. And first of all, congratulations, and thanks for working on this project. What's so special about- >> Thank you for that. >> Argo the project, and why have you guys decided to build a platform on it, and where is this coming together? Take us through why this is so important. >> I think Argo has been a very fast growing open source project for multiple reasons. A, it has been built for the new way of building and deploying an application. It's cloud native. You mentioned Kubernetes becoming kind of the de facto way of running application. It's the de facto way to run automation and pipeline. But also Argo has been built from the ground up to the latest practices of how we deploy software. We deploy software now differently. We deploy it using a GitOps practice. We're deploying it using canary blue-green progressive deployment. And Argo has been built around these practices, around these technologies, and has been very much widely adopted by the community. In the past, the KubeCon you've mentioned, Argo was all over the place. And we were very glad to be working with the community to talk about what the next steps with Argo. >> Yeah, it's a really good point. I would like to just follow up on that because you see this being talked about. It always comes up, where is open source really outside of a pure contributors matter? And when you have corporations contributing, you seeing this has been the trend. You saw it with Lyft, with Envoy, companies doing more and more open source. This is part of a big collaboration. And again, this comes back down to this whole why it's relevant and why it's so special with Argo. Continue to talk about relationship because it's not just you guys, it's now community. >> Yeah, I can speak to that. The Argo project is something that we maintain in partnership with several other companies and really our relationship with it is that this is something that we're actively contributing to. This is something that we're helping build the roadmap on and planning the events around and all those kinds of things. And we're doing that because we really believe in this technology and we've built our platform on it. 
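Since progressive delivery comes up throughout this conversation, here is a sketch of the shape of an Argo Rollouts canary strategy, written as Python data and dumped to YAML. The image, replica count, and step values are illustrative only and are not taken from the interview.

```python
# Sketch: an Argo Rollouts canary strategy -- shift a slice of traffic, pause
# to watch metrics, then widen. Emitted as YAML for a Git-tracked manifest.
import yaml

rollout = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Rollout",
    "metadata": {"name": "demo"},
    "spec": {
        "replicas": 5,
        "selector": {"matchLabels": {"app": "demo"}},
        "template": {
            "metadata": {"labels": {"app": "demo"}},
            "spec": {"containers": [
                {"name": "demo", "image": "example/demo:1.1"}  # placeholder image
            ]},
        },
        "strategy": {
            "canary": {
                "steps": [
                    {"setWeight": 20},
                    {"pause": {"duration": "5m"}},  # watch metrics before widening
                    {"setWeight": 50},
                    {"pause": {}},                  # manual judgment before 100%
                ]
            }
        },
    },
}

print(yaml.safe_dump(rollout, sort_keys=False))
```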
So when you deploy Codefresh, you're deploying technology that's built directly on Argo and is designed specifically to solve that problem that you spoke to at the top of the hour. We all want to deliver software faster. We all want to have fewer regressions. We want to have fewer breaking changes. We want software to be super reliable. We want to be comfortable with what we're doing. That's really why we picked Argo because that technology that we have it is to Raziel's point delivered in this new way. It's delivered using GitOps. And that's a whole revolution and change in the way that people build and deploy software. And bringing cohesion into that experience is so critical to building the confidence that lets you actually deploy often and frequently and more. >> Dan, if you don't mind just expanding on that one point about the problem you solve, because to me, this has been kind of that evolution. It's almost like, yeah, there's been problems, plural, and opportunities that you saw with those in growing markets like this with DevOps and DevSecOps and now cloud native. What is the catalyst behind all of this? What was the epiphany behind it? How did it get so much momentum? What was it really doing under the covers? >> Well, it's a very simple and easy to use set of tools. And that's one of the big things is that if you look at the ideas of GitOps and there's actually a foundation around this that were part of called open GitOps to GitOps working group under the CNCF. And those principles of, I want to, yes, do my software defined as code. I want to do my infrastructure defined as code and I need something monitoring by production run times and making sure that the declared desired state is always matching the actual state. Those principles have actually been around for a number of years. And with Kubernetes, we really unlocked an API that allowed us to start doing GitOps and this is why we bring in Argo and you see the rise of Argo CD and other workflows and what we've been doing is really because that technology has been unlocked now. So the ability to define how your software is supposed to run and now your entire software delivery stack should run, all defined and then monitored and then kept in check using the GitOps operator. That critical unlock is what's really driving the massive adoption. And like Raziel said, Argo is the fastest growing and most popular open source project for delivering software. And it's not even close. >> Yeah, this is really great point. And I want to get into that 'cause I want to know why, what you guys do on your platform versus the open source and get that relationship settled? Before we get there, though, I want to get your reaction to some of the commentary in the industry 'cause GitOps trend has been exploding into new directions. I mean, it used to be a term about 10 years ago called big data. And at the beginning where data was all big data. Now it was DevOps revolution around data as well. But now you're hearing people talk about big code. Like, I mean, the code bases are becoming so huge. So as a developer, you're leveraging large open source code. This idea of the software delivery with existing code and new code just adds to more code. There's more code being developed every day. >> There is more code delivered every day. And I think that organization realize today, almost in every industry that they have to pace up how fast and how frequent they update their software delivery. 
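A rough sketch of the GitOps loop Dan describes above: desired state lives in Git, and an operator continuously diffs it against what is actually running and converges the two. The function bodies here are placeholders for illustration, not Codefresh or Argo internals.

```python
# Sketch of a GitOps reconciliation loop: Git holds the declared desired state,
# the agent watches the live system and converges it toward that declaration.
import time

def read_desired_state(repo_path: str) -> dict:
    # Placeholder: a real agent would parse the manifests committed to Git.
    return {}

def read_live_state(cluster: str) -> dict:
    # Placeholder: a real agent would query the cluster's API server.
    return {}

def apply_changes(changes: dict, cluster: str) -> None:
    # Placeholder: a real agent would patch the cluster toward desired state.
    print(f"applying {len(changes)} change(s) to {cluster}")

def reconcile_once(repo_path: str, cluster: str) -> None:
    desired = read_desired_state(repo_path)
    live = read_live_state(cluster)
    drift = {k: v for k, v in desired.items() if live.get(k) != v}
    if drift:
        apply_changes(drift, cluster)  # self-heal: Git wins over manual edits

def reconcile_forever(repo_path: str, cluster: str, interval_s: int = 180) -> None:
    while True:
        reconcile_once(repo_path, cluster)
        time.sleep(interval_s)
```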
We're living in a world in which every aspect of our life has been disrupted by software, and organizations realize that they have to keep up and figure out how to deploy software more frequently and more reliably. And I think, as you mentioned, Kubernetes and cloud native really became the de facto way of running applications. I think most organizations have made that decision to move into cloud native. The second question after that is, okay, now that we have all our applications running there, how fast and how frequently can we deploy applications to cloud native? And that's the stage at which we're super excited about Argo and our platform, because it basically streamlines building applications for cloud native, deploying applications to cloud native, and so on. >> Yeah, and I think that highlights the business value. You're getting a lot of the conversations with businesses that say they want the modern application at cloud scale. And at the end of the day, it comes down to speed and security. So how fast can I get the app out? How well does it work? Does it perform? And does it have security? And I don't want it slow. >> Exactly. Exactly. >> It kind of oversimplifies it, but that's kind of the net net. So when you look at Argo open source, what it's done and kind of where you guys are taking it, can you talk about the differences between your enterprise version and the open source version, and the interplay there, the relationship, the business model, how customers can play on both sides or understand the difference? >> Sure. >> Go ahead. >> Go ahead, Raziel. >> Okay, so I think Argo, as you mentioned, is probably the most advanced technology today to both run pipelines and deploy: there are Argo Events to trigger pipelines, Argo Workflows to run the pipelines, Argo CD for GitOps, and Argo Rollouts for canary and blue-green strategies. And the adoption is really exploding. Just as an anecdote, in December we worked with the community and organized the ArgoCon event, for which we had initially expected about 500 attendees, and we ended up with more than 4,000 registrants, the majority of them coming from enterprises. Now, as we talked to the community during this conference and figured out what they're still missing that would help them take the benefit they get from Argo to the next level, a few things came up. One is that Argo is a great technology; however, Argo today is fragmented into four projects. There is Events, there is Workflows, there is Argo CD, and there is Argo Rollouts. And there is a need to bring them all together into a solid platform, one solid runtime that can be easily installed, where you can monitor all of these in a single UI, in a single control plane. That's one aspect. The second is scalability: really being able to manage it centrally across multiple clusters, not just in one cluster. And what we bring in with the new platform we're so excited about is exactly that. We're the first to get all of these four projects in one runtime and one control plane, and we also allow the community to run it across multiple clusters from one place, getting the solution, not just the technology. >> If I may add to that, the value of bringing these projects together is that it provides so many insights. So when you're trying to figure out that some breaking change has been made, you don't necessarily know where it is, because you have a lot of microservices that are out there.
You have a lot of teams working on it. By bringing all of these things together, we're able to look at all of the commits, all of the deployments, all of the Jira issues. All of these components combined together, so you really get a single view where you can see everything that's going on. And this is another element where when you're trying to deploy software at scale, you're trying to deliver it faster. People are getting a little bit overwhelmed because there are so many updates and so many different services and so many teams working that they're starting to miss that visibility. So this is what we want to bring into the ecosystem is we really want them that visibility to be super clear. And by bringing all of the Argo components, the Argo tools together, we're able to do that in a single dashboard. >> Yeah, so if I get this right, let me just double click on that because it sounds like, yeah, Argo's great. It's been organically growing, a lot of different components to it, but when you start getting into pushing code in an organization, you have, I call the old-school version control kind of vibe going on where it's like you don't know what's out there and how that affects the system as it's a distributed system, which cloud is. There are consequences when stuff breaks. So we all know that. Is that kind of where you guys are getting at? The challenge is actually the opportunity at the same time where it's all goodness, but then when you start looking at scale and the system impact, is that kind of where the open source and you guys pick up, is that right? >> This is one aspect. I think the second one is that again, when you look at each individual component of Argo, each provide a lot of value by itself. But when you sum it, the value of the sum is greater than the value of the individual. So when you're taking, really the events and workflow, Argo CD and Argo Rollout, and you bring them all together into single runtime. The value of its time is really automation all the way from code to cloud. It's not breaking into, there is like an automation for CI, there's an automation for CD, there's information for progressive delivery. It's actually automated all the way from the Git commit through the GitOps through the deployment strategy, and so on. And being able to monitor it and scale it in the enterprise scale. So, of course, it's helping enterprise and make Argo to some level more crucial for enterprise, if I may say, but second is really bringing all of these components together and get the outcome be greater than the individual parts. >> Yeah, that's a good point. Yeah, make it make a commercial grade, if you will, for enterprise who wants to have support and consistency and whatnot. What other problems are you solving? Dan, can you chime in on the whole, how you guys resolve some of these challenges for the enterprise? Because, again, some stability is key as well, but also the business benefit has got to be there for the development teams. >> Yeah. So there's several. One aspect is that the way that most people operate today is they essentially do a bunch of commands and engage with systems. And then hopefully at the end, they write those things to Git. And this is a little bit backwards if you think about it because there's a situation where you can end up with things in production that were never checked in, or maybe somebody is operating and they're making a change. 
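For reference, this is roughly what wiring a repository path to a cluster looks like with an Argo CD Application, sketched as Python data and rendered to YAML. The repo URL, path, and namespace are placeholders and are not taken from the interview.

```python
# Sketch: an Argo CD Application -- "watch this path in this repo and keep this
# cluster/namespace in sync with it," with pruning and self-healing turned on.
import yaml

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "demo", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://github.com/example/gitops-config.git",  # placeholder
            "targetRevision": "main",
            "path": "apps/demo",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "demo",
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(application, sort_keys=False))
```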
If we look at most of the downtime that's occurred over the last two years, it's because people have flubbed a key when they were typing in a command or something like that. The way that this system works is that we provide an interface, both the CLI and the GUI, where those operations interactions actually end with a Git commit. So rather than doing an operation and then hopefully committing to Git, most of the operations are actually done first in Git, or if there is something that can't be done first in Git, it's maybe bootstrapped and then committed to Git as part of a single command. So this means you have end-to-end traceability. It also means your auditability is way better. And then the second, the other component that we're adding is that security and scale layer. So we are securing these things, we're building in single sign-on, and all those robust security things you would expect to have across all these instances. So many organizations, when they're building their software delivery tools, they have to deploy instances in many locations. And so this is how you end up with companies that have 5,000 instances that are all out of date and insecure. Well with Codefresh, if you need to deploy a component onto this end cluster or something like that, you may have thousands of them. All of those are monitored and taken care of in a centralized way, so I can do all of my updates at once. I can make sure they're all up to date. I'm not running with a bunch of known CVEs or something like that and it's clear. The components are also designed in an architectural way. So that only the information that is needed is ever passed out. So I can have a cluster that is remotely managed, that checks out code, that the control plane never has access to. So this hybrid model has been really popular with our customers. We have customers in healthcare, we have customers in defense and in financial services, all these regulated industries. The flow of information is really critical. So this hybrid model allows you to deploy something that has the ease of a SaaS solution, but has the security of an on-prem solution while being centrally managed and easy to take care of. >> Yeah, it's a platform. It's what it is. It's not a tool. It's not a tool anymore. It's a platform. >> Exactly. >> I think the foundational aspect of this is critical. And you mentioned automation before. If you're going to go end-to-end automation, you have some stuff in the system that whether it hasn't been checked in yet. I mean, we know what this leads to. Disaster or a lot of troubleshooting and disruption. That's what it seems to solve. Am I getting that right? Is that right? >> Yeah. >> Go ahead. >> Yeah, it helps automate the whole process. But as you say, it's really like identify what needs not to be going all the way to production and really kind of avoid vulnerabilities or any flaws in the software. So it automates everything, but in a way that the automation can identify issues and avoid them from coming into the production. >> Well, great stuff here. I've got to ask you guys now that you've got that settled. It's really, I see the value there, how you guys are letting it grow organically and with Argo and then building that platform for businesses and developers. It's really cool. And I see the foundational value there. It just only gets better. How you guys contributing back to open source and helping the wider GitOps and Argo communities? 
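Here is a small sketch of Dan's point above that an operation should end in a Git commit: instead of patching the cluster directly, the new image tag is written to the tracked manifest and committed, and the GitOps operator rolls it out. The paths, tag, and repo layout are illustrative only.

```python
# Sketch: promote a new image by editing the Git-tracked manifest and
# committing, so the change is traceable and the operator applies it.
import pathlib
import subprocess

def promote_image(repo: str, manifest: str, new_image: str) -> None:
    path = pathlib.Path(repo) / manifest
    text = path.read_text()
    # Naive tag swap for illustration; real tooling would edit the YAML properly.
    updated = "\n".join(
        f"        image: {new_image}" if line.strip().startswith("image:") else line
        for line in text.splitlines()
    )
    path.write_text(updated + "\n")
    subprocess.run(["git", "-C", repo, "add", manifest], check=True)
    subprocess.run(["git", "-C", repo, "commit",
                    "-m", f"deploy: promote {new_image}"], check=True)
    subprocess.run(["git", "-C", repo, "push"], check=True)

# Example usage (placeholder paths and tag):
# promote_image("/tmp/gitops-config", "apps/demo/deployment.yaml",
#               "example/demo:1.2")
```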
Because this is, again, the rising tide that's bringing all the boats into the harbor, so to speak. So this is a good trend and people will acknowledge that. So how's this going to work as you guys work back into the open source community? >> So both myself and the other maintainers work closely with the community on the roadmap and making sure that we're addressing issues. I think if you look in the last quarter, we probably have upwards of 40 or 50 different issues that we've solved in terms of fixing a bug or adding features or things like that. So making sure that these tools, which are really the undergirding components of our platform, they have to be really robust. They have to be really strong. And so we're contributing those things back. And then when it comes to the scalability side, these are things that we can build into the platform. So the value should be really clear. I can deploy this, I can manage it myself, I can build tools on top of it. And if I want to start doing it at scale, maybe I want support. That's when I really am going to go to Codefresh and start saying, let's get the enterprise-level platform. >> Awesome. GitOps, a lot of people, like some naysayers, may say, hey, it's the latest fad. Is it here to stay? We were talking about big code earlier. GitOps, and obviously open source, just every year gets better and better, and grows. I mean, I remember when I was breaking into the business, you had to sell under the table. Now it's all free and open and getting better every year. Just the growth of code. Is GitOps a fad? How do you talk to people who say that? I mean, besides slapping them around and saying wake up. I mean, how do you guys address that when people say it's just the latest fad? >> So if I may comment here, and Dan, feel free to chime in, I think that GitOps is a continuation of a trend where everything is source code. As a developer, many years ago myself, and still writing code, code was always the source of truth; that's where we write the code. But now code actually is also describing how our application is running in production. And we've already seen kind of where it goes next. We also hear about infrastructure as code. So now actually we're storing in code the way the infrastructure should be. And I think that the benefit of storing all this configuration in source code, which has been built to track changes and to be able to roll back, that is just going to be here to stay. And I think that's the new way of doing things. >> All right, gentlemen, great. Closing statements. Please share an update on the company. What's it all about? What event have you got coming? I know you got a big launch. Can you take us through? Take us home. >> Join us on February 1st, we're going to be launching the Codefresh software delivery platform. Raziel and I will be hosting the event. We've got a number of customers, a number of members of the community who are going to be joining us to show off that platform. So you're going to be able to see it in action, see how the features work, and understand the value of it. And you'll see how it works with GitOps. You'll see how it helps you deliver software at scale. That's February 1st. You can get information at codefresh.io. >> Raziel, Dan, thanks for coming on. >> Thank you. >> Pretty good showcase. Thanks for sharing. Congratulations. Great venture. Loved the approach. Love the growth in cloud native, and you guys are sure on the cutting edge. 
Fresh code, people love fresh code, codefresh.io. Thanks for coming on. >> Thank you. Thank you. >> Okay, this is the AWS Startup Showcase Open Cloud Innovations. Cloud scale, software, data. That's the future of modern applications being developed, changing the game to the next level. This is the CUBE's coverage season two episode one of the ongoing AWS Startup series here in theCUBE.
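As an illustrative aside to the Git-first operational model Dan describes above, where operations end in a Git commit rather than hoping someone records the change afterward, the sketch below shows the pattern in minimal form. It is a hypothetical example, not Codefresh's or Argo's implementation: the repository layout, the manifest format, and the record_operation helper are invented for illustration, and the snippet assumes the target directory is already an initialized Git repository.

```python
import json
import subprocess
from pathlib import Path


def record_operation(repo_dir: str, app: str, desired_state: dict, message: str) -> None:
    """Write the desired state for an app into a Git repo and commit it.

    The commit is the operation: a GitOps controller (Argo CD, for example) is
    expected to notice the new commit and reconcile the cluster toward this
    state, so the audit trail and the change are the same event.
    """
    repo = Path(repo_dir)
    manifest = repo / "apps" / f"{app}.json"  # hypothetical repo layout
    manifest.parent.mkdir(parents=True, exist_ok=True)
    manifest.write_text(json.dumps(desired_state, indent=2, sort_keys=True))

    subprocess.run(["git", "add", str(manifest.relative_to(repo))], cwd=repo, check=True)
    subprocess.run(["git", "commit", "-m", message], cwd=repo, check=True)


# Example: "scale the checkout service to 5 replicas" becomes a commit,
# not an ad-hoc command typed against production.
# Assumes /tmp/gitops-demo is an already-initialized Git repository.
record_operation(
    repo_dir="/tmp/gitops-demo",
    app="checkout",
    desired_state={"image": "checkout:1.4.2", "replicas": 5},
    message="scale checkout to 5 replicas",
)
```

The payoff Dan points to falls out of the pattern itself: because every change is a commit, "what is running" and "what was changed, and by whom" are answered by the same history, rather than by whatever someone last typed at a prompt.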

Published Date : Jan 26 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dan Garfield | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
John | PERSON | 0.99+
Brian | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Vishal | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Boston | LOCATION | 0.99+
Brian Lazear | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
December | DATE | 0.99+
February 1st | DATE | 0.99+
Juniper | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Vishal Jain | PERSON | 0.99+
five | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Fortinet | ORGANIZATION | 0.99+
Raziel Tabib | PERSON | 0.99+
Raziel | PERSON | 0.99+
Git | TITLE | 0.99+
Valtix | PERSON | 0.99+
Twenty people | QUANTITY | 0.99+
Argo | ORGANIZATION | 0.99+
twenty people | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
14 million | QUANTITY | 0.99+
Palo Alto | ORGANIZATION | 0.99+
last week | DATE | 0.99+
5,000 instances | QUANTITY | 0.99+
third option | QUANTITY | 0.99+
Codefresh | ORGANIZATION | 0.99+
Today | DATE | 0.99+
Dan | PERSON | 0.99+
Valtix | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
yesterday | DATE | 0.99+
One | QUANTITY | 0.99+
second question | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
more than 4,000 registrants | QUANTITY | 0.99+
second thing | QUANTITY | 0.99+
40 | QUANTITY | 0.99+
Envoy | ORGANIZATION | 0.99+
One aspect | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
one aspect | QUANTITY | 0.99+
one | QUANTITY | 0.99+
last quarter | DATE | 0.99+
second | QUANTITY | 0.99+
third thing | QUANTITY | 0.99+
two core engines | QUANTITY | 0.99+
both options | QUANTITY | 0.99+
three core elements | QUANTITY | 0.98+
four | QUANTITY | 0.98+

Buddy Brewer, New Relic | AWS re:Invent 2021


 

(upbeat music) >> Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021 I'm Lisa Martin. This is our third day here on set We've got two live sets, two remote studios, over a hundred guests on the program and a lot going on with AWS and its ecosystem of partners am pleased to welcome back one of our Cube alumni, Buddy Brewer, the GVP & GM of product partnerships at New Relic. Welcome back, Buddy. Good to have you. >> Thanks it's great to be here >> Great to be in an in-person event isn't? >> No kidding it's really amazing to see everybody out here and after spending so much time on zoom calls, we had a lot of really great moments among the team and the booth playing the game of seeing if people's height matched up with >> (laughs) >> What your expectation was because so many of the people we work with >> Never mind. >> We've only known over zoom. >> Yes ,and zoom has been a savior for all of us we've been doing so much recording on zoom at the same time it's great to be here in person and seeing what a safe job AWS has done with getting I from hearing upwards of 30,000 people in here that are here in person. So talk to me about you lead the technology partnerships at New Relic. Talk to me about your role, and then we'll get into the partnership with AWS. >> Yeah, absolutely. Well, you know, the point about zoom, it's fascinating. Like you said, that just having the ability to communicate with people has been such a key enabler of being able to make progress and to continue to lead our personal and our professional lives despite the pandemic I mean, imagine what it would have been like if this had happened 10 years ago, even, but certainly 50 years ago >> Right. or something like that, right? Like everything would have ground to a halt and technology took on such an amazing, you know, critical role in allowing us to do all of these things and so at New Relic, we're all about helping people make sure that all of this software works correctly. And so observability helps people understand the detail level about everything from the front end, the end user experience to every single piece that happens along the path of delivering that experience all the way down to the infrastructure into the network. But my role at New Relic is also to help all of the other tools that software developers use every day to create those experiences that they connect into their observability platform so that they can understand all of those details and make sure that people are able to continue doing things that have become really so basic to life like ordering groceries or getting food, or, you know, communicating with a loved one over something like zoom. >> Yeah the things that to your point, if this had happened, you know, five, 10 years ago, it would have been a completely different story. 
We've been able to function really well and one of the things too, that, you know, I noticed yesterday and today, you probably did as well with the plethora, typical AWS the plethora of announcements, the amount of innovation that's going on, the customer flywheel that we've just seen this acceleration of technology and what it's enabling, but the observability portion is really key you talk about, you know, the developers need to the whole SDLC they need to be able to understand exactly what's going on because at the end of the day, whether it's a consumer or an enterprise of the other end of the spectrum, we need to know exactly what's going on because people's patience is far thinner these days the pandemic showed is that there is really no having access to real-time data. Isn't a luxury anymore it's really a necessity. >> Right, yeah, absolutely. >> Talk to me about some of these so a lot of announcements coming up from AWS, you guys talk to me about the partnership, what you guys are doing there. And some of the things that are exciting on that front. >> Yeah, AWS is a really key partner for us. We're big users of AWS ourselves for our observability platform and all of our infrastructure and, you know, we've had our own journey as a 13 year old business that started out pre cloud and moving our own infrastructure to the cloud. And then along that journey, we've worked closely with AWS and we've built a lot of joint solutions to help people who are moving to the cloud themselves or who are cloud native to understand all of the details about what's happening in that software so we have over 60 different integrations to all of the different tools with Amazon that you can use on the cloud from data storage, to EKS on Fargate and all of that stuff. And then we recently announced a five-year strategic agreement with Amazon to make it even easier for customers to adopt New Relic if they're building in Amazon AWS and so you know, we're in their marketplace, we have an offering for startups, for people who are just getting started that, you know, provides really simple and fast on-ramps with discounts and things like that. That's all designed to help people, software developers in particular, focus on what matters most to them, which is building great experiences for their customers. You know, you mentioned that the SDLC and this is one of the things that, you know, our mission at New Relic is to make observability a daily data-driven habit for developers across all phases of the software delivery life cycle. The problem with observability and how it's used today is that it's only used in the run phase by most people they use it when the software is on fire to put the fire out we believe that, that telemetry has tremendous strategic value in the plan, build and deploy phases of software development as well. And so partnerships like AWS allow us to unlock the accessibility of that data across all of those different phases for people who software developers are as a result in many ways that the things that we were talking about earlier with the expectations that the pandemic has placed on how software has to work, it's not an option they're busier, they're under more pressure than they've ever been before and so we want to help them relieve that pressure with tools that help them do their jobs better. 
>> Relieving that pressure is key there is so much pressure on developers I mean, these days from observability to security and that sort of thing, but it sounds like one of the things that you're also fundamentally doing is really shifting that observability left and helping them from a cultural perspective, it seems like almost a shift, but you're trying to make things easier for them giving them more tools and to unlock what they're not seeing right now. >> That's right and you know, the interesting thing about it is everyone realizes that observability is critical to, you know, successful software businesses so for example, we did a survey recently of 1300 software developers and IT decision makers and executives, and found that among the C-level executives that were surveyed 80% of them expected to increase their observability budget and 20% of those expected to increase it significantly. However, that same survey found that a very small percentage of those who we actually surveyed feel that they have a mature observability practice today. And when we unpack the reasons why in the survey, we found that most of them reduce down to basically this issue of they just don't have enough time to instrument all of the software, especially in a world where the shift to the cloud has driven a change in architecture where monoliths have been torn down and replaced by hundreds, or may be even thousands of microservices. >> Right. >> And we're in an era now where if observability isn't really, really easy and incredibly fast and simple to execute on then software developers can no longer instrument fast enough to keep up with the pace of the software that they're delivering and so what that leads to is visibility gaps, visibility gaps lead to poor customer experiences. And so what we're trying to do, and we've been on this massive simplification of our own platform to make it, you know, incredibly cost-effective at just 25 cents a gigabyte for ingestion and really simple licensing seat based licensing, where you get access to all of our tools to make it really simple and to take simply minutes to get observability on all those different pieces. >> If simplicity is a word that we throw around a lot, but it's really critical element and it's interesting to understand how do you actually facilitate that? You talked about, you know, kind of the 80 20 rule there. >> Yeah. >> A lot of the organization's not on that maturity curve with observability, how does New Relic and its ecosystem of partners like AWS how do you help have those conversations within organizations in any industry tell them, understand how you can actually simplify that and unlock that visibility, knowing that it's not only a matter of software development, but it's a competitive differentiator. It's also something that can damage a brand if they're not top of it. >> Yeah, we launched a re-imagined version of our partner ecosystem really our entire integration ecosystem about six weeks ago on October 13th called New Relic Instant Observability. And one of the central goals of New Relic IO, which we call it for short is to make it take just like five minutes for people to instrument something. 
So in the old way, what people had to do is if they wanted observability, they had to go learn about an observability vendor then they had to go install it, figure out how all that works and then they could get to solving their problem, which might've just been simply instrumenting a Kafka you know and so what we want to do is just keep people in that mode if all you wanted to do is instrument Kafka, then go find the Kafka instrumentation tile on New Relic and observability and then there's a guided install process that takes you through that and at the end you've instrumented Kafka and if you want to add something else like EKS Fargate from Amazon, or if you want to add something else like a Java service, you can simply click more of those guidance installs and add within minutes in an incremental way without having to stop and do a whole vendor evaluation to do so in fact, one of the other things that we launched recently is a free tier that's free forever. So there's no trial process or anything you don't have to put in a credit card if all you want to do is instrument this one thing right now, you can go through this process provision a free account you get access to all of our functionality for one user and ingest up to a hundred gigabytes of telemetry data for free within minutes. And so what we're trying to do is take all of that adoption friction out so that people aren't fighting with their instrumentation so much, and again, they can get back to doing what they really want to do in the first place, which has built great experiences for their end users. >> Great experiences for the end users but that translates to employee experience that translates to an end user customer experience, which translates back to brand reputation. I'm just wondering, you know, you're focused on the developers and we've been hearing a lot about the last two and a half days, a big focus on developers has observability kind of escalated up and its evolution up the stack within organizations is this a C-suite concern? Is this a board level concern? where does this fit now? and what's the vision of New Relic to deliver on that? >> With observability? >> Yes. >> Yeah, 90% of those in the survey that I was talking about felt that observability was not just a tool that they needed to use, but strategically critical to their business and, you know, this goes back to, as we know, and especially as a result of the intensity on the importance of software coming out of the pandemic, your digital business is your business these days. And so if you don't understand what's happening in that software and you can't move quickly, then you know you're really in trouble in terms of trying to succeed in a highly competitive environment and that goes back to again, one of our core beliefs is that all of this telemetry data that people have been collecting about how their software operates is so useful in contexts outside of just when there's a problem in production. Imagine if you could take that information and you could actually put it inside the IDE, which is something that we did with a recent acquisition of a company called CodeStream. We can take this telemetry data and put it inside the IDE so that as developers are writing the software, they know where those issues are. 
You can click straight from a stack frame, for example, inside of our, where we show all of our errors in a capability called Error's inbox and shoot right into your IDE and go see where the line of code is that caused that error, shortening that feedback loop and unlocking this really big investment that a lot of companies make in telemetry data earlier in the software life cycle, we believe is the future of observability and we want to help people get there. >> Well, the observability is really key for organizations these days because we've been hearing every company these days has to be a data company. >> Yeah. >> And it's one thing to say that it's a whole other thing to be able to implement it and observability is absolutely critical to that as being able to take that data and apply it in different contexts to really enable that business to be digital which is absolutely table-stakes these days to be successful and to deliver that customer experience ultimately. >> Yeah. >> That's what it all do. >> Yeah, absolutely. And you know, the other thing is really hard about this problem when I talk with our customers and we found this in the survey as well, is that, you know, software developers, don't just use one tool to create software they use a lot of tools in fact, 13% of those that we surveyed use 10 or more tools. >> Whoa. >> Just for the observability piece. And so, you know, obviously we're always trying to expand organically what we do inside of our platform to cover more and more use cases, but an equally important part of our strategy, if we really want to make observability a data-driven daily habit for people is to find all of those other, you know, really well-built amazing tools that those developers use and find valuable ways to integrate with them. And so that's the other part of our ecosystem that we've built out is this ability to take all of the other tools that you use and wire them into New Relic so that, for example, if you're using, let's say Lacework for security then you can, you know, if someone's installed a Bitcoin miner on your infrastructure somewhere, you can quickly navigate because of that integration from a poor customer experience through the infrastructure that's suffering may be with, you know, a lot of memory pressure, and a lot of CPU being used for this Bitcoin miner and then find out that, you know, through the integration where the miner was installed, how it got installed so that you can remediate those types of issues and connecting those pieces together, making software truly interoperable is another thing that's really critical to our mission at New Relic. >> It is critical to not only to the developers, but to the organizations and their success as businesses these days Buddy thank you for joining me, talking about what's going on at New Relic What's new, how you're really empowering those developers and all of the downstream positive effects that, that leads to we appreciate your time. >> Thank you ,thanks for having me. >> All right, you are Buddy Brewer I'm Lisa Martin you're watching theCUBE, the global leader in live tech coverage. (soft music)
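As a concept sketch of what agent-based instrumentation automates in the conversation above, timing each unit of work, capturing failures, and shipping that telemetry to a central backend, the decorator below hand-rolls the idea in plain Python. It is not New Relic's API; the send_telemetry stand-in and the event fields are assumptions for illustration, and a real agent would collect far more detail without requiring code changes like these.

```python
import functools
import time
import traceback


def send_telemetry(event: dict) -> None:
    """Stand-in for shipping an event to an observability backend.

    Assumption: a real agent would batch these and send them over the network;
    here we just print them so the sketch is runnable on its own.
    """
    print(event)


def instrumented(span_name: str):
    """Record duration and outcome for the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                send_telemetry({
                    "span": span_name,
                    "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                    "outcome": "success",
                })
                return result
            except Exception:
                send_telemetry({
                    "span": span_name,
                    "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                    "outcome": "error",
                    "stack": traceback.format_exc(),
                })
                raise
        return wrapper
    return decorator


@instrumented("checkout.process_order")
def process_order(order_id: str) -> str:
    return f"processed {order_id}"


process_order("A-1001")
```

The visibility-gap problem Buddy raises is exactly why guided installs matter: hand-instrumenting hundreds of microservices this way does not scale, so the agent has to make it close to free. For rough cost intuition, at the quoted 25 cents per gigabyte, a service ingesting around 10 GB of telemetry a day would run on the order of 75 dollars a month.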

Published Date : Dec 1 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
AWS | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
New Relic | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
five-year | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
20% | QUANTITY | 0.99+
five minutes | QUANTITY | 0.99+
13% | QUANTITY | 0.99+
two live sets | QUANTITY | 0.99+
Buddy Brewer | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
today | DATE | 0.99+
90% | QUANTITY | 0.99+
October 13th | DATE | 0.99+
thousands | QUANTITY | 0.99+
third day | QUANTITY | 0.99+
two remote studios | QUANTITY | 0.99+
yesterday | DATE | 0.99+
25 cents | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Java | TITLE | 0.99+
Buddy | PERSON | 0.98+
over 60 different integrations | QUANTITY | 0.98+
30,000 people | QUANTITY | 0.98+
CodeStream | ORGANIZATION | 0.98+
one thing | QUANTITY | 0.97+
1300 software developers | QUANTITY | 0.97+
50 years ago | DATE | 0.96+
over a hundred guests | QUANTITY | 0.96+
pandemic | EVENT | 0.96+
one tool | QUANTITY | 0.96+
one user | QUANTITY | 0.96+
Kafka | TITLE | 0.95+
Fargate | ORGANIZATION | 0.94+
13 year old | QUANTITY | 0.93+
10 years ago | DATE | 0.93+
EKS | ORGANIZATION | 0.93+
first | QUANTITY | 0.92+
New Relic | ORGANIZATION | 0.9+
six weeks ago | DATE | 0.86+
Cube | ORGANIZATION | 0.85+
up to a hundred gigabytes | QUANTITY | 0.8+
every single piece | QUANTITY | 0.79+
80 20 | OTHER | 0.78+
re:Invent 2021 | EVENT | 0.72+
five, | DATE | 0.71+
things | QUANTITY | 0.68+
Bitcoin | OTHER | 0.67+
2021 | DATE | 0.65+
EKS Fargate | TITLE | 0.65+
much time | QUANTITY | 0.64+

Avishek Kumar, Dell Technologies & Richard Goodwin, Ultraleap


 

>> Welcome everybody to this cube conversation. My name is Dave Vellante and we're joined today by Richard Goodwin, who's the group director of IT at Ultraleap, and Avishek Kumar, who manages Dell's Power Store product line; he directs that product line along with several other lines for the company. Gentlemen, welcome to the cube. >> (Avishek) Hi Dave. >> (Richard) Hi >> (Dave) So Richard, Ultraleap, very cool company, tracks hand movements, and so forth. Tell us about the company and the technology; I'm really interested in how it's used. >> Yeah, we've had many product lines, obviously. We're very innovative, and the organization was spun up from a number of PhD students who were the co-founders of Ultraleap, initially with mid-air haptics, as many people may have seen, but also hand tracking, mid-air touch, sense and feel. So, yeah, it's quite impressive what we have produced and the number of sectors and markets that we're in. And obviously, to push us to where we are, we have relied upon lots of the Dell technology, both software and hardware. >> (Dave) And what's your role at the company? >> I'm the group IT director, I'm responsible for the IT and business platforms, all infrastructure, network, hardware, software, and also the transition of those platforms to ensure that we're scalable. And we are able to develop our software and hardware as rapidly as possible. >> (Dave) Awesome. Yeah, a lot of data behind that too I bet. Okay Avishek, you direct a number of products at Dell across the portfolio, Unity, XtremIO, the SC series, and of course PowerVault. It's quite the portfolio that you look after. So let's get into the case study, if we can, a bit, Richard, maybe you could paint a picture of your environment, some of the key applications that you're supporting and maybe what your infrastructure looks like. Give us a high level view. >> Sure. So, pre Power Store, we had quite a disparate architecture, so a fairly significant split, siding on the side of the cloud, not as hybrid as we would like, and not as much on-prem as we would have liked, but that's changed quite significantly. So we now have a number of servers and storage and storage arrays that we have on-premise and that we host ourselves. So we are moving quite rapidly, you know, as a startup and then moving to a scale-up, we needed that scalability and that versatility, and also the whole OPEX versus CAPEX, and also not being driven by lots of SaaS products and architecture and infrastructure, where we needed to be in control because of our development cycles and our product development. >> (Dave) So wait, okay, so, too much cloud. I'm hearing you wanted a little bit of a dose of on-prem. Explain that a little bit more; the cloud wasn't doing it for you in terms of your development cycle, your control. Can you double click on that? >> Yeah. Some of the, some of the control, and you know, there's always a balance, because there are certain elements of our development cycles and our engineering, software engineering, where we need a very high parallelism for some of the work that we're doing, which then, you know, the CAPEX investment makes things very, very challenging, not commercially the right thing to do. 
However, there is some of our information, some of our IP, some of the secure things that we do, where we also do not want upgrades, as an example, or any outages, or certain types of server and spec that need to be quite unique, and that needs to be within our control. >> (Dave) Got it, okay. Thank you for that. Avishek, we're going to talk about Power Store today. So set it up, please, tell us about Power Store, what it is, you know, why it's important to this conversation. >> Sure. So Power Store is a product that we launched in May of 2020, roughly a little bit more than a year ago now. And it's a brand new architecture that Dell Technologies released. And I'll talk about a few unique aspects of the product, but at the end of the day, where we start with is, it's a storage platform, right? So, similar to what Richard is saying here, in terms of being able to consolidate the customer's environment, whether it is block, file, vVols, physical, virtual environments. And, as I said, it's a brand new architecture where we leveraged pieces of existing products where it made sense; we are using all the latest and greatest technologies, delivering the best performance and the best data reduction. And where we see a lot of traction is the options that it brings to the table for our customers in terms of flexibility, whether they want to add capacity or compute. In fact, we have the AppsON deployment model, where customers can consolidate their compute as well onto the storage platform if needed. So a lot of innovation from a platform perspective itself, and it's not just about the platform itself, but what comes along with it, right? So we refer to it as an ecosystem, part of it, where we work with Ansible playbooks, the CSI plugin, you name it, right. And the storage platform doesn't stand by itself in a customer's environment; there are other aspects of the infrastructure that it needs to integrate with as well, right? So if they are using Ansible playbooks, we want to make sure the integration is there. >> (Dave) Got it. >> And last, but perhaps not the least, is the intelligence built into the platform, right? So as we are building these capabilities into the product, there is intelligence built into the product, as well as outside the product, where things like Cloud IQ, things like the technologies built into Power Store itself, make it that much easier for the customers to manage the infrastructure and go from there. >> (Dave) Thank you for that. So, Richard, what was the workload? So actually, you started with sort of a greenfield on-prem, if I understand it correctly; what was the workload that you were sort of building around, or workloads? >> So, we had a number of different applications, some of which we cannot really talk about too much, but we had a VxRail, we had a smaller Dell array, and we have lots of what we class as runners, Kubernetes clusters that we run, and quite a few different VMs that run on our on-prem server infrastructure and storage arrays. And the issue that we began to hit, because of the high IO from some of our workloads, was that we were hitting very high latency, which rapidly began to cause us issues, especially with some of our software engineering teams. And that is when we embarked upon a competitive RFP for Dell Power Store. Dell were already engaged from an end-user compute perspective, where they'd been selected as the end-user compute provider from a previous competitive RFP. 
And then we engaged them regarding the storage issue that we had, and we engaged our account lead and account exec, and a number of solution architects were working with us to ensure that we had the optimal solution. Dell were selected over the competitors because of many reasons: you know, the new technology, the de-duplication, the compression, the overall data reduction, and the guarantee that also came with that, the four-to-one data reduction guarantee, which was significant to us because of the amounts of data that we hold. And we have, you know, as I've mentioned, we're pulling further data of ours back into our hosted environments, which will end up on the Power Store, especially with the de-duplication that we're now getting. We've actually hit nine-to-one, which is significant. We were expecting four-to-one, maybe five-to-one with some of the data types. And what was excellent was that they were that confident that they did not even review our data types prior, and they were willing to stand by that guarantee of four-to-one. And we've exceeded that; we've got significantly different data types on that array, and we've hit nine-to-one, and that's gradually grown over the last nine months, you know, we were kind of at six, then we moved to seven, and now we're hitting a nine-to-one ratio. >> (Dave) That's great. So you get a little free storage. That's interesting what you're saying, Richard, cause I just assumed that a company that guaranteed four-to-one is going to say okay, let us inspect your workload first and then we'll do the deal. So Avishek, what's the tech behind that data reduction that allows you, with such confidence, to not have to pre-inspect the workload in this case anyway? >> Yeah. So, it goes back to the technologies behind the product, right? So we stand behind the technology and we want to make it simpler for our customers as well, where, again, we don't want to spend weeks looking at all the data, scanning all the data, before giving the guarantee. So we stand behind the technology, where we understand that as the data is coming in, we are always going to de-duplicate it. We are always going to compress it. There is technology within the product where we are offloading some of that outside the CPU, so it is not impacting the performance that the applications are going to see. So data reduction by itself is not good enough, performance by itself is not good enough. Both of them have to be together, right? So, and that's what Power Store brings to the table. >> (Dave) Thank you. So Richard, I'm interested. I mean, I remember the Power Store announcement; I sort of saw it leading up to it. And one of the big thrusts from Dell, the way I phrase it, is essentially trying to create a cloud-like experience on-prem. So really focused on simplicity. So my question to you is, let's start with just the deployment. You know, how complicated was it to install? What was that process like? How many clicks, I mean, not that you have to tell me how many clicks, but you know, what I'm asking is, how difficult was it to get from zero to, you know, up and running? >> Well, we actually faced a very difficult challenge. We were in quite a difficult situation where we'd pretty much gone off the cliff in terms of our IOPS performance. 
So the RFP was quite rapid, and then, whichever vendor was successful, we needed to get that deployed rather rapidly and on the floor in our data center and server rooms, which we did. And it was very, very simple: within three weeks of placing the order, we had that array in our server rack and we'd begun the migration. It was very simple to set up. And with the management of that array, we've seen, say, a 40% reduction in terms of effort to be able to manage our storage, because it is very self-contained, you know, even from a reporting perspective. The deployment, the migration, was all very, very simple, and you know, we've done some work recently where we had to also do some work on the array, and some other migrations that we were doing, and the resilience came to the forefront, where the dual architecture and no single point of failure enabled us to do some things that we needed to do quite rapidly. Because of the dual nodes, the resilience within the unit and within the Power Store itself was considerable: we kept performance up, and it also prioritizes any disk rebuilds, keeps the incoming ingest rates high, and prioritizes, you know, the workloads, which is really impressive, especially when we are moving so quickly with our technology. We don't really have much time to, you know, micromanage the estate. >> (Dave) Can you just repeat what you said on the percent reduction? I think I heard you cut out there a little bit. A percent reduction on management, on the labor side? >> So our lead storage engineer has estimated around 40% less management. >> (Dave) Wow. Okay. So that's good. So actually, I love this conversation because, you know, in the early days of automation, people were like, ah, that's my job, provisioning LUNs. I'm really good at it, but I think people are realizing that it's actually not something that you want to be really good at. It's something that you want to eliminate. So now maybe that storage engineer got his or her nights and weekends back, but what do they do now when they get that extra time? What do you put them on? You know, more strategic initiatives or, you know, other tech things on the to-do list? What's that like? >> The last thing, you know, for any of my team, whether it's the storage leads or some of the infrastructure team that were also involved and engaged, cause you know, as an organization, we have to be quite versatile as a team in our skillsets: we don't want to be doing those BAU, mundane tasks. Even the storage engineer does not want to be allocating LUNs and allocating storage to physical servers, VMs, etc. We want all of that to be automated. And, you know, those engineers, they're working on some of the cutting edge things that we're trying to do with machine learning, as an example, which is much more interesting. It's what they want to be doing. You know, that aids the obvious things like retention, interest and personal development. That base IT infrastructure management is not where any of the engineers wants to be. >> (Dave) In terms of the decision to go with Dell Power Store, I'm definitely hearing there was a relationship. There was an existing relationship with Dell. I'm sure that played into it. >> There were many things. 
So the relationship wasn't really part of this, even though I've mentioned the end-user compute in any sets or anything that we're procuring, we want best of breed, you know, best of sets. And that was done on, the cost is definitely a driver. The technology, you know, is a big trust to us, We're a tech company, new technology to us is also fascinating, not only our own, but also the storage guarantee, the simplicity, the resilience within, within the unit. Also the ability, which was key to us because of what we're trying to do with our hybrid model and bring, bring back repatriate some of the data as it were from the client. We needed that ability to, with ease, to be able to scale up and scale high, and the Power Store gave us that. >> (Dave) When you say cost, I want to dig into that price or you know, the price tag or the, the cost, I mean, when you do the business case. And I wonder if we could add a little color to that. >> (Richard) There's two elements to this, so they're not only the cost of the price tag, but then also cost of ownership and the comparisons that we were running against the other vendors, but also the comparisons that we were running from a CAPEX investment against OPEX and what we have in the cloud, and also the performance, performance that we get from the cloud and our cloud storage and the resilience within that. And then also the initial price tag, and then comparing the CapEx investments to the OPEX where all elements that were key to us making our decision. And I know that there has to be some credit taken by the Dell account team and that their relationship towards the final phrase of that RFP, you know, were key initially, not all, we were just looking for the best possible storage solution for Ultraleap. >> (Dave) And to determine that on your end, was that like a feature, because it's sometimes fuzzy what the business impact is going to be like that 40% you mentioned, or the data reduction at nine to one, when there's a promise of four to one, did you, what did you do? Did you kind of do a feature function analysis and sort of line that up and, and say, okay, I'm going to map that to our business processes our IT processes and try to predict what the impact would be. Is that how you did it? or did you take a different approach? >> (Richard) We did. So we did that, obviously between vendors usually expected an RFP, but then also mapping to how that would impact the business. And that is not an easy process to go through. And we've seen more gains even comparing one vendor to another, some of that because of the technology, the terminology is very very different and sometimes you have to bring that upper level and also gain a much more detailed understanding, which at times can be challenging, but we did a very like-for-like comparison and, and also lots of research, but you're quite right. The business analysis to what we needed. We had quite a good forecast and from summarized stock information data, and also our engineering and business and strategic roadmap, we were able to map those two together, not the easiest of experiences, not one that I want to repeat, but we, we got it. (Dave laughing) >> (Dave)Yeah, a little bit of art and science involved. Avishek, maybe you could talk about Power Store, what, you know, give us the commercial. What makes it different from other products in the market of things like cloud IQ? Maybe you could talk about that a little bit. >> Sure. 
So again, it's music to my ears when Richard talks about the ease of deployment and the management, because there is a lot of focus on that. But even, as I said earlier, from a technology perspective, there's a lot of goodness built in, in terms of being able to consolidate a customer's environment onto the platform. So that's more from a storage point of view: give the best performance, give the best data reduction, the storage efficiencies. The second part, of course, is the flexibility, the options that Power Store gives to the customers in terms of sort of disaggregating the storage and the compute aspects of it. So as a customer, I can start at different points in terms of what my requirements are today, but going forward, as the requirements change from a compute or capacity perspective, you can use the scale-up and scale-out capabilities, and then the intelligence built in, right? So, as you scale out your cluster, being able to move storage around as needed, and being able to do that non-disruptively. So instead of saying, Mr. Customer, your storage is at 90% capacity, being able to say that, based on your historical trending, we expect you to run out of capacity in six months; some small things like that, right? And of course, with the dial-home, the SupportAssist capabilities enabled, Cloud IQ brings a lot of intelligence to the table as well. In addition to that, as I mentioned earlier, there is the AppsON capability that gives another level of flexibility to the customers to integrate their storage infrastructure into a virtual environment, if the customer chooses to do that. And last but not the least, it's not just about the product, right? It's about the programs that we have put around it. Anytime Upgrade is a big differentiator for us, where it's an investment protection program for customers: if they want to have the peace of mind, in terms of three months, nine months, three years down the line, if we come out with new technologies, being able to upgrade to that non-disruptively is a big part of it as well. It's peace of mind for the customers that, yes, I'm getting into the Power Store architecture today, but going forward, I'm protected from that point of view. So Anytime Upgrade is a new business program that we put around leveraging the architectural benefits of Power Store: whether your compute requirements or your storage requirements change, you're covered from that point of view. So again, a very quick overview of what Power Store is and why it is different. And again, that's where that comes from. >> (Dave) Thank you for that. Richard, are you actively using Cloud IQ? What kind of value do you get from it? >> Not currently. However, we have had plans to do that; the uptake and, to be fair, our internal workload has not allowed us to do that. But one of the other key reasons for selecting Power Store was the non-disruptive element, you know, with other SaaS products, other providers, and other issues that we have experienced. That was a key decision for us from a Power Store perspective. One of the other things, you know, to go back to the conversation slightly, in terms of performance, we are getting there. You know, there's a 400% speed improvement in publishing. We've got 80% faster code coverage. Our firmware builds are 1300% quicker than they were previously. 
and the time savings of the storage engineer and, you know, as a director of IT, I often asked for certain reports from, from the storage array, we're working at, for storage forecast, performance forecast, you know, when we're coming close to product releases, code drops that we're trying to manage, the reporting or the Power Store is impressive. Whereas previously my storage engineer would not be the, the most happiest of people when I would be trying to pull, you know, monthly and quarterly reports, et cetera. Whereas now it's, it's ease and we have live dashboards running and we can easily extract that information. >> (Dave) I love that because, you know, so often we talk about the 40% reduction in IT labor, which okay, that's cool. But then your CFO's going to say, yeah, but it's not like we're getting rid of people. We, you know, we're still spending that money and you're like, okay. You're now into soft dollars, but when you talk about 400%, 80%, 1300% of what you're talking about business impact and that's telephone numbers to a CFO. So I love those metrics. Thank you for sharing. >> Yeah. But what would, they obviously, it's sort of like dashboards when they visualize that they are very hard hitting, you know, the impact. You're quite right the CFO does chase down you know, the availability and the resource profile, however, we're on a huge upward trajectory. So having the right resilience and infrastructure in places is exactly what we need. And as I mentioned before, those engineers are all reallocated to much more interesting work and, you know, the areas that will actually drive our business forward. >> (Dave) Speaking of resilience, are you doing any replication? >> Not currently. However, we've actually got a meeting regarding this today with some of the enterprise and some of their storage specialists, in a couple of hours time, actually, because that is a very high on the agenda for us to be able to replicate and have a high availability cluster and another potentially Power Store need. >> (Dave) Okay. So I was going to ask you where you want to take this thing. I'm hearing, you're looking at cloud IQ, really try to exploit that. So you got some headroom here in terms of the value that you can get out of this platform to do replication, faster recovery, et cetera, maybe protect against, you know, events. Guys, Thanks so much for your time. Really appreciate your insights. >> (Richard) No problem. >> (Avishek) Thank you. >> And thank you for watching this cube conversation. This is Dave Vellante and we'll see you next time.
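To make two of the quantitative points in the conversation above concrete, the short sketch below works through the arithmetic: first, what a data-reduction ratio means for effective capacity (the 4:1 guarantee versus the 9:1 ratio Richard reports), and second, the kind of naive capacity-runway projection that tooling like Cloud IQ performs in a far more sophisticated way. The array size and the monthly usage samples are hypothetical; only the ratios come from the interview, and the linear-trend model is an assumption for illustration.

```python
def effective_capacity_tb(physical_tb: float, reduction_ratio: float) -> float:
    """Logical data that fits on an array at a given data-reduction ratio."""
    return physical_tb * reduction_ratio


def months_until_full(history_tb, capacity_tb):
    """Naive linear projection: average monthly growth over the history,
    then months remaining until the array is full. Returns None if flat."""
    growth_per_month = (history_tb[-1] - history_tb[0]) / (len(history_tb) - 1)
    if growth_per_month <= 0:
        return None
    return (capacity_tb - history_tb[-1]) / growth_per_month


physical_tb = 100  # hypothetical array size, not from the interview

for label, ratio in [("guaranteed 4:1", 4.0), ("observed 9:1", 9.0)]:
    print(f"{label}: {effective_capacity_tb(physical_tb, ratio):.0f} TB of logical data")

# Hypothetical monthly used-capacity samples (TB) on the same 100 TB array.
print(f"months until full: {months_until_full([52, 55, 59, 62, 66, 70], physical_tb):.1f}")
# guaranteed 4:1: 400 TB of logical data
# observed 9:1: 900 TB of logical data
# months until full: 8.3
```

The jump from the guaranteed ratio to the observed one is what Dave calls getting a little free storage: the same physical array holds more than twice the logical data that was promised.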

Published Date : Oct 14 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Richard | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Richard Goodwin | PERSON | 0.99+
40% | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
1300% | QUANTITY | 0.99+
400% | QUANTITY | 0.99+
Dave | PERSON | 0.99+
nine months | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
three months | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
Ultraleap | ORGANIZATION | 0.99+
seven | QUANTITY | 0.99+
two elements | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
Both | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Power Store | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
six | QUANTITY | 0.99+
today | DATE | 0.99+
six months | QUANTITY | 0.99+
OPEX | ORGANIZATION | 0.99+
three weeks | QUANTITY | 0.98+
both | QUANTITY | 0.98+
may | DATE | 0.98+
CapEx | ORGANIZATION | 0.98+
Store | ORGANIZATION | 0.97+
Avishek | ORGANIZATION | 0.96+
One | QUANTITY | 0.96+
four | QUANTITY | 0.95+
around 40% | QUANTITY | 0.95+
five | QUANTITY | 0.95+
CAPEX | ORGANIZATION | 0.94+
Dao | ORGANIZATION | 0.93+
Avishek | PERSON | 0.92+
more than a year | QUANTITY | 0.92+

AWS Startup Showcase Opening


 

>> Hello and welcome to today's CUBE presentation of the AWS Startup Showcase. I'm John Furrier, your host, highlighting the hottest companies in DevOps, data analytics and cloud management. Lisa Martin and Dave Vellante are here to kick it off. We've got a great program for you again. This is our new community event model that we're doing every quarter; every quarter we have a new episode, and this is quarter three this year, or episode three, season one, of the hottest cloud startups, and they're gonna be featured. Then we're gonna do a keynote package, and then 15 companies will present their story. Go check them out, and then we have a closing keynote with a practitioner, and we've got some great lineups. Lisa, Dave, great to see you. Thanks for joining me. >> Hey guys. >> Great to be here. >> So Dave, got to ask you, you know, we're back in events; last night we were at the Fortinet event, where they had the golf PGA championship with theCUBE. Now we got the hybrid model. This is the new normal. We're in, we got these great companies, we're showcasing them. What's your take? >> Well, you're right. I mean, I think there's a combination of things. We're seeing some live shows. We saw what we did with Mobile World Congress. We did the show with AWS Storage Day, where we were at the Spheres; there was a live audience, but they weren't there physically, it was just virtual. And yeah, so, and I just got pinged about re:Invent. Hey Dave, you gotta make your flights. So I'm making my flights. >> We're gonna be at the Amazon Web Services Public Sector Summit next week. At least, a lot, a lot of cloud convergence going on here. We got many companies being featured here; we spoke with their CEOs and their top people: cloud management, DevOps, data, analytics, security. Really cutting edge companies. >> Yes, cutting edge companies who are all focused on acceleration. We've talked about the acceleration of digital transformation the last 18 months, and we've seen a tremendous amount of acceleration in innovation with what these startups are doing. We've talked, like you said, to the C-suite; we've also talked to their customers about how they are innovating so quickly with this hybrid environment, this remote work, and we've talked a lot about security in the last week or so. You mentioned that we were at Fortinet: the cybersecurity skills gap, and what some of these companies are doing with automation, for example, to help shorten that gap, which is a big opportunity for the job market. >> Great stuff. Dave, so the format of this event: you're going to have a fireside chat with a practitioner; we like to end these programs with a great, experienced practitioner, cutting edge in data. To begin, Lisa and I are gonna be kicking off with, of course, Jeff Barr, to give us the update on what's going on at AWS, and then a special presentation from Emily Freeman, who is the author of DevOps for Dummies; she's introducing new content, the revolution in DevOps, DevOps 2.0. And of course Jerry Chen from Greylock, CUBE alumni, is going to come on and talk about his new thesis, castles in the cloud, creating moats at cloud scale. We've got a great lineup of people, and so the front end's going to be great. Dave, give us a little preview of what people can expect from the fireside chat at the end. >> Well, at the highest level, John, I've always said we're entering that sort of third great wave of cloud. The first wave was experimentation. The second big wave was migration. 
The third wave is integration, deep business integration. And what you're going to hear from Hello Fresh today is how they, like many companies, started early last decade: they started with an on-prem Hadoop system, and then of course we all know what happened: S3 essentially took the knees out from the on-prem Hadoop market, lowered costs, brought things into the cloud. And what Hello Fresh is doing is they're transforming from that legacy Hadoop system; it's running on AWS, but into a data mesh, you know, it's a passionate topic of mine. Hello Fresh was scaling, they realized that they couldn't keep up, so they had to rethink their entire data architecture, and they built it around data mesh. Clements Key and Christoph Soewandi are gonna explain how they actually did that; they're on a journey to decentralized data mesh. >> And your posts have been awesome on data mesh. We get a lot of traction. Certainly your Breaking Analysis; for the folks watching, check out Dave Vellante's Breaking Analysis every week, highlighting the cutting edge trends in tech. Dave, we're gonna see you later. Lisa and I are gonna be here in the morning talking with Emily. We got Jeff Barr teed up. Dave, thanks for coming on. Looking forward to the fireside chat. Lisa, we'll see you when Emily comes back on. But we're gonna go to Jeff Barr right now; Dave and I are gonna interview Jeff. >> Hey Jeff, here he is. Hey, how are you? How's it going? Really well. So I gotta ask you, re:Invent is on, everyone wants to know that's happening, right? We're good with re:Invent? >> Re:Invent is happening. I've got my hotel, and actually, listening today, I just remembered I still need to actually book my flights. I've got my to-do list on my desk and I do need to get my flights. Uh, really looking forward to it. I can't wait to see all the announcements and blog posts. >> We're gonna hear from Jerry Chen later. I'd love to, after, get your reaction to this castles in the cloud idea, where competitive advantages can be built in the cloud. We're seeing examples of that. But first I gotta ask you, give us an update of what's going on. The APN ecosystem has been an incredible, uh, celebration these past couple weeks. >> So, a lot of different things happening, and the interesting thing to me is that as part of my job, I often think that I'm effectively living in the future, because I get to see all this really cool stuff that we're building just a little bit before our customers get to. And so I'm always thinking, okay, here I am now, and what's the world going to be like in a couple of weeks to a month or two, when these launches I'm working on actually get out the door? And that's always really, really fun, just kind of getting that little edge into where we're going. But this year was a little interesting because we had two really significant birthdays: we had the 15-year anniversary of both EC2 and S3, and we're so focused on innovating and moving forward that it's actually pretty rare for us at AWS to look back and say, wow, we've actually done all these amazing things in the last 15 years. >> You know, it's kind of cool, Jeff, if I may, is, you know, of course in the early days everybody said, well, the place for startups is AWS,
and now the great thing about the startup showcase is we're seeing the startups that are very near, or some of them have even reached, escape velocity, so they're not tiny little companies anymore; they're transforming their respective industries. >> They really are, and I think that as the startups grow, they really start to lean into the power of the cloud. As they start to think, okay, we've got our basic infrastructure in place, we're serving data, we're serving up a few customers, everything is actually working pretty well for us, we've got our fundamental model proven out, now we can invest in publicity and marketing and scaling. But they don't have to think about what's happening behind the scenes. If they've got their auto scaling, or if they're serverless, the infrastructure simply grows to meet their demand, and it's just a lot less things that they have to worry about. They can focus on the fun part of their business, which is actually listening to customers and building up an awesome business. >> Jeff, as you guys are putting together all the big pre-re:Invent stuff, there's a lot that goes on prior as well, and they save all the big good stuff for re:Invent. But you start to see some themes emerge this year. One of them is modernization of applications, the speed of application development in the cloud, with the cloud scale, DevOps personas, whatever persona you want to talk about, but basically speed, the speed of the app developers, where other departments have been slowing things down. I won't name names, but security groups and IT; I mean, I shouldn't have said that, only kidding. But no, seriously, people want it in minutes and seconds now, not days or weeks. You know, whether it's policy. What are some of the trends that you're seeing around this this year as we get into some of the new stuff coming out? >> So, Dave, customers really do want speed, and we've actually encapsulated this for a long time at Amazon in what we call the bias for action leadership principle, where we just need to jump in and move forward and make things happen. A lot of customers look at that and they say, yes, this is great, we need to have the same bias for action. Some do. Some are still trying to figure out exactly how to put it into play. And they absolutely, for sure, need to pay attention to security. They need to respect the past and make sure that whatever they're doing is in line with IT. But they do want to move forward. And the interesting thing that I see time and time again is it's not simply about, let's adopt a new technology. It's how do we keep our workforce engaged? How do we make sure that they've got the right training? How do we bring our IT team along for this hopefully new and fun and exciting journey, where they get to learn some interesting new technologies? They've got all this very much accumulated business knowledge they still want to put to use; maybe they're a little bit apprehensive about something brand new when they hear about the cloud, but by and large, they really want to move forward. They just need a little bit of help to make it happen. 
Oftentimes you meant you have to sacrifice some things on quality and what you're going to hear from some of the startups today is how they're addressing that to automation and modern devoPS technologies and sort of rethinking that whole application development approach. That's something I'm really excited to see organization is beginning to adopt so they don't have to make that tradeoff anymore. >>Yeah, I would >>never want to see someone >>sacrifice quality, >>but I do think that iterating very quickly and using the best of devoPS principles to be able to iterate incredibly quickly and get that first launch out there and then listen with both ears just >>as much >>as you can, Everything. You hear iterate really quickly to meet those needs in, in hours and days, not months, quarters or years. >>Great stuff. Chef and a lot of the companies were featuring here in the startup showcase represent that new kind of thinking, um, systems thinking as well as you know, the cloud scale and again and it's finally here, the revolution of deVOps is going to the next generation and uh, we're excited to have Emily Freeman who's going to come on and give a little preview for her new talk on this revolution. So Jeff, thank you for coming on, appreciate you sharing the update here on the cube. Happy >>to be. I'm actually really looking forward to hearing from Emily. >>Yeah, it's great. Great. Looking forward to the talk. Brand new Premier, Okay, uh, lisa martin, Emily Freeman is here. She's ready to come in and we're going to preview her lightning talk Emily. Um, thanks for coming on, we really appreciate you coming on really, this is about to talk around deVOPS next gen and I think lisa this is one of those things we've been, we've been discussing with all the companies. It's a new kind of thinking it's a revolution, it's a systems mindset, you're starting to see the connections there she is. Emily, Thanks for coming. I appreciate it. >>Thank you for having me. So your teaser video >>was amazing. Um, you know, that little secret radical idea, something completely different. Um, you gotta talk coming up, what's the premise behind this revolution, you know, these tying together architecture, development, automation deployment, operating altogether. >>Yes, well, we have traditionally always used the sclc, which is the software delivery life cycle. Um, and it is a straight linear process that has actually been around since the sixties, which is wild to me um, and really originated in manufacturing. Um, and as much as I love the Toyota production system and how much it has shown up in devops as a sort of inspiration on how to run things better. We are not making cars, we are making software and I think we have to use different approaches and create a sort of model that better reflects our modern software development process. >>It's a bold idea and looking forward to the talk and as motivation. I went into my basement and dusted off all my books from college in the 80s and the sea estimates it was waterfall. It was software development life cycle. They trained us to think this way and it came from the mainframe people. It was like, it's old school, like really, really old and it really hasn't been updated. Where's the motivation? I actually cloud is kind of converging everything together. We see that, but you kind of hit on this persona thing. Where did that come from this persona? Because you know, people want to put people in buckets release engineer. I mean, where's that motivation coming from? 
>>Yes, you're absolutely right that it came from the mainframes. I think, you know, waterfall is necessary when you're using a punch card or mag tape to load things onto a mainframe, but we don't exist in that world anymore. Thank goodness. And um, yes, so we, we use personas all the time in tech, you know, even to register, well not actually to register for this event, but a lot events. A lot of events, you have to click that drop down. Right. Are you a developer? Are you a manager, whatever? And the thing is personas are immutable in my opinion. I was a developer. I will always identify as a developer despite playing a lot of different roles and doing a lot of different jobs. Uh, and this can vary throughout the day. Right. You might have someone who has a title of software architect who ends up helping someone pair program or develop or test or deploy. Um, and so we wear a lot of hats day to day and I think our discussions around roles would be a better, um, certainly a better approach than personas >>lease. And I've been discussing with many of these companies around the roles and we're hearing from them directly and they're finding out that people have, they're mixing and matching on teams. So you're, you're an S R E on one team and you're doing something on another team where the workflows and the workloads defined the team formation. So this is a cultural discussion. >>It absolutely is. Yes. I think it is a cultural discussion and it really comes to the heart of devops, right? It's people process. And then tools deVOps has always been about culture and making sure that developers have all the tools they need to be productive and honestly happy. What good is all of this? If developing software isn't a joyful experience. Well, >>I got to ask you, I got you here obviously with server list and functions just starting to see this kind of this next gen. And we're gonna hear from jerry Chen, who's a Greylock VC who's going to talk about castles in the clouds, where he's discussing the moats that could be created with a competitive advantage in cloud scale. And I think he points to the snowflakes of the world. You're starting to see this new thing happening. This is devops 2.0, this is the revolution. Is this kind of where you see the same vision of your talk? >>Yes, so DeVOps created 2000 and 8, 2000 and nine, totally different ecosystem in the world we were living in, you know, we didn't have things like surveillance and containers, we didn't have this sort of default distributed nature, certainly not the cloud. Uh and so I'm very excited for jerry's talk. I'm curious to hear more about these moz. I think it's fascinating. Um but yeah, you're seeing different companies use different tools and processes to accelerate their delivery and that is the competitive advantage. How can we figure out how to utilize these tools in the most efficient way possible. >>Thank you for coming and giving us a preview. Let's now go to your lightning keynote talk. Fresh content. Premier of this revolution in Devops and the Freemans Talk, we'll go there now. >>Hi, I'm Emily Freeman, I'm the author of devops for dummies and the curator of 97 things every cloud engineer should know. I am thrilled to be here with you all today. I am really excited to share with you a kind of a wild idea, a complete re imagining of the S DLC and I want to be clear, I need your feedback. I want to know what you think of this. You can always find me on twitter at editing. 
Emily, most of my work centers around DevOps, and I really can't overstate what an impact the concept of DevOps has had on this industry. In many ways it built on the foundation of Agile to become a default, a standard we all reach for in our everyday work. When DevOps surfaced as an idea in 2008, the tech industry was in a vastly different space. AWS was in its infancy, offering only a handful of services. Azure and GCP didn't exist yet. The vast majority of companies maintained their own infrastructure. Developers wrote code and relied on sysadmins to deploy new code at scheduled intervals, sometimes months apart. Container technology hadn't been invented, applications adhered to a monolithic architecture, databases were almost exclusively relational, and serverless wasn't even a concept. Everything from the application to the engineers was centralized. Our current ecosystem couldn't be more different. Software is still hard, don't get me wrong, but we continue to find novel solutions to consistently difficult, persistent problems. Now, some of these end up being a sort of rebranding of old ideas, but others are a unique and clever take on abstracting complexity or automating toil, or perhaps most important, rethinking and challenging the very premises we have accepted as canon for years, if not decades. In the years since DevOps attempted to answer the critical conflict between developers and operations engineers, DevOps has become a catch-all term, and there have been a number of derivative works. DevOps has come to mean 5000 different things to 5000 different people. For some, it can be distilled to continuous integration and continuous delivery, or CI/CD. For others, it's simply deploying code more frequently, perhaps adding a smattering of tests. For others still, it's organizational: they've added a platform team, perhaps even a questionably named DevOps team, or have created an engineering structure that focuses on a separation of concerns, leaving feature teams to manage the development, deployment, security and maintenance of their siloed services. Whatever the interpretation, what's important is that there isn't a universally accepted standard of what DevOps is or what it looks like in execution. It's a philosophy more than anything else, a framework people can utilize to configure and customize their specific circumstances to modern development practices. The characteristic of DevOps that I think we can all agree on, though, is that it attempted to capture the challenges of the entire software development process. It's that broad umbrella, that holistic view, that I think we need to breathe life into again. The challenge we face is that DevOps is an increasingly outmoded solution to a previous problem. Developers now face cultural and technical challenges far greater than how to more quickly deploy a monolithic application. Cloud native is the future, the next collection of default development decisions, and one the DevOps story can't absorb in its current form. I believe the era of DevOps is waning, and in this moment, as the sun sets on DevOps, we have a unique opportunity to rethink, rebuild, re-platform even. Now, I don't have a crystal ball. That would be very handy. I'm not completely certain what the next decade of tech looks like, and I can't write this story alone. I need you, but I have some ideas that can get the conversation started. I believe that to build on what was, we have to throw away assumptions that we've taken for granted all this time in order to move forward.
We must first step back. The software or systems development life cycle, what we call the SDLC, has been in use since the 1960s, and it's remained more or less the same since before color television and the touch-tone phone. Over the last 60-odd years we've made tweaks, slight adjustments, massaged it. The stages or steps are always a little different. With Agile and DevOps we sort of looped it into a circle and then an infinity loop, we've added pretty colors. But the SDLC is more or less the same, and it has become an assumption. We don't even think about it anymore. Universally adopted constructs like the SDLC have an unspoken permanence. They feel as if they have always been and always will be. I think the impact of that is even more potent if you were born after a construct was popularized. Nearly everything around us is a construct, a model, an artifact of a human idea. The chair you're sitting in, the desk you work at, the mug from which you drink coffee or sometimes wine, buildings, toilets, plumbing, roads, cars, art, computers, everything. The SDLC is a remnant, an artifact of a previous era, and I think we should throw it away, or perhaps more accurately replace it, replace it with something that better reflects the actual nature of our work. A linear, single-threaded model designed for the manufacture of material goods cannot possibly capture the distributed complexity of modern socio-technical systems. It just can't. And these two ideas aren't mutually exclusive: that the SDLC was industry-changing, valuable and extraordinarily impactful, and that it's time for something new. I believe we are strong enough to hold these two ideas at the same time, showing respect for the past while envisioning the future. Now, I don't know about you, I've never had a software project go smoothly in one go, no matter how small, even if I'm the only person working on it and committing directly to master. Software development is chaos. It's a study in entropy, and it is not getting any simpler. The model with which we think and talk about software development must capture the multithreaded, non-sequential nature of our work. It should embody the roles engineers take on and the considerations they make along the way. It should build on the foundations of Agile and DevOps and represent the iterative nature of continuous innovation. Now, when I was thinking about this, I was inspired by ideas like extreme programming and the spiral model. I wanted something that would have layers, threads, even a way of visually representing multiple processes happening in parallel. And what I settled on is the revolution model. I believe the visualization of revolution is capable of capturing the pivotal moments of any software scenario, and I'm going to dive into all the discrete elements. But I want to give you a moment to have a first impression, to absorb my idea. I call it revolution because, well, for one it revolves; its circular shape reflects the continuous and iterative nature of our work. But also because it is revolutionary. I am challenging a 60-year-old model that is embedded into our daily language. I don't expect Gartner to build a magic quadrant around this tomorrow, but that would be super cool, and you should call me. My mission with this is to challenge the status quo, to create a model that I think more accurately reflects the complexity of modern cloud native software development.
The revolution model is constructed of five concentric circles describing the critical roles of software development: architecting, developing, automating, deploying and operating. Intersecting each loop are six spokes that describe the production considerations every engineer has to consider throughout any engineering work, and that's testability, securability, reliability, observability, flexibility and scalability. The considerations listed are not all-encompassing. There are of course things not explicitly included. I figured if I put 20 spokes, some of us, including myself, might feel a little overwhelmed. So let's dive into each element in this model. We have long used personas as the default way to divide audiences and tailor messages to groups of people. Every company in the world right now is repeating the mantra of developers, developers, developers, but personas have always bugged me a bit, because this approach typically either oversimplifies someone's career or needlessly complicates it. Few people fit cleanly and completely into persona-based buckets like developers and operations anymore. The lines have gotten fuzzy. On the other hand, I don't think we need to specifically tailor messages so as to call out the difference between a DevOps engineer and a release engineer, or a security administrator versus a security engineer. But perhaps most critically, I believe personas are immutable. A persona is wholly dependent on how someone identifies themselves. It's intrinsic, not extrinsic. Their titles may change, their jobs may differ, but they're probably still selecting the same persona on that ubiquitous drop-down we all have to choose from when registering for an event. Probably this one too. I was a developer and I will always identify as a developer, despite doing a ton of work in areas like DevOps and AIOps and DevRel. In my heart, I'm a developer. I think about problems from that perspective first, and it influences my thinking and my approach. Roles are very different. Roles are temporary, inconsistent, constantly fluctuating. If I were an actress, the parts I would play would be lengthy and varied, but the persona I would identify as would remain an actress, an artist. The lesson being: your work isn't confined to a single set of skills. It may have been a decade ago, but it is not today. In any given week or sprint, you may play the role of an architect, thinking about how to design a feature or service; a developer, building out code or fixing a bug; an automation engineer, looking at how to improve the manual processes we often refer to as toil; a release engineer, deploying code to different environments or releasing it to customers; or an operations engineer, ensuring an application functions in consistent, expected ways. And no matter what role we play, we have to consider a number of issues. The first is testability. All software systems require testing to assure architects that designs work, developers that the code works, operators that infrastructure is running as expected, and engineers of all disciplines that code changes won't bring down the whole system. Testing in its many forms is what enables systems to be durable and have longevity. It's what reassures engineers that changes won't impact current functionality. A system without tests is a disaster waiting to happen, which is why testability is first among equals at this particular roundtable. Security is everyone's responsibility.
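To make the shape of the model concrete, here is a minimal illustrative sketch in Python. It is not from the talk or from any tool; only the role and consideration names come from the description above, and the checklist helper and its wording are invented for illustration.

```python
# Illustrative sketch only: the role and consideration names are from the talk;
# this data structure and helper are invented to show how the model composes.

ROLES = ["architecting", "developing", "automating", "deploying", "operating"]

CONSIDERATIONS = [
    "testability", "securability", "reliability",
    "observability", "flexibility", "scalability",
]

def review_checklist(role: str) -> list[str]:
    """Every role crosses every spoke: return the six questions for one role."""
    if role not in ROLES:
        raise ValueError(f"unknown role: {role!r}")
    return [f"{role}: have we addressed {c}?" for c in CONSIDERATIONS]

if __name__ == "__main__":
    for item in review_checklist("deploying"):
        print(item)
```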
But few of us really understand how to design and execute secure systems, and I struggle with this. Security incidents, for the most part, are high impact, low probability events. The really big disasters, the ones that end up on the news and get us all free credit reporting for a year, don't happen super frequently, and thank goodness, because you know that there are endless small vulnerabilities lurking in our systems. Security is something we all know we should dedicate time to but often don't make time for. And let's be honest, it's hard and complicated and a little scary. DevSecOps, the first derivative of DevOps, asked engineers to move security left. This approach meant security was a consideration early in the process, not something that would block release at the last moment. This is also the consideration under which I'm putting compliance and governance. While not perfectly aligned, I figure all the things you have to call lawyers for should just live together. I'm kidding. But in all seriousness, these three concepts are really about risk management: identity, data, authorization. It doesn't really matter what specific issue you're speaking about, the question is who has access to what, when and how, and that is everyone's responsibility at every stage. Site reliability engineering, or SRE, is a discipline, a job and an approach for good reason. It is absolutely critical that applications and services work as expected most of the time. That said, availability is often mistakenly treated as a synonym for reliability. Instead, it's a single aspect of the concept. If a system is available but customer data is inaccurate or out of sync, the system is not reliable. Reliability has five key components: availability, latency, throughput, fidelity and durability. Reliability is the end result, but resiliency, for me, is the journey, the actions engineers can take to improve reliability. Observability is the ability to have insight into an application or system. It's the combination of telemetry and monitoring and alerting available to engineers and leadership. There's an aspect of observability that overlaps with reliability, but the purpose of observability isn't just to maintain a reliable system, though that is of course important. It is the capacity for engineers working on a system to have visibility into the inner workings of that system. The concept of observability actually originates in linear dynamical systems. It's defined as how well the internal states of a system can be understood based on information about its external outputs. It is critical, when companies move systems to the cloud or utilize managed services, that they don't lose visibility and confidence in their systems. The shared responsibility model of cloud storage, compute and managed services requires that engineering teams be able to be quickly alerted to identify and remediate issues as they arise. Flexible systems are capable of adapting to meet the ever-changing needs of the customer and the market segment. Flexible code bases absorb new code smoothly, embody a clean separation of concerns, are partitioned into small components or classes, and are architected to enable the now as well as the next. In flexible systems, change dependencies are reduced or eliminated, database schemas accommodate change well, and components communicate via a standardized and well-documented API.
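A small sketch of the point that availability alone is not reliability: all five components have to hold. The field names and thresholds below are invented example SLOs, not values from the talk.

```python
# Illustrative sketch only: the five components come from the talk; the
# thresholds are made-up example SLOs.
from dataclasses import dataclass

@dataclass
class ServiceHealth:
    availability: float    # fraction of successful requests, e.g. 0.9995
    p99_latency_ms: float  # 99th percentile latency
    throughput_rps: float  # requests served per second
    fidelity: float        # fraction of records accurate and in sync
    durability: float      # fraction of stored data retained

def is_reliable(h: ServiceHealth) -> bool:
    """Available but inaccurate is not reliable: every component must pass."""
    return (
        h.availability >= 0.999
        and h.p99_latency_ms <= 300
        and h.throughput_rps >= 100
        and h.fidelity >= 0.9999
        and h.durability >= 0.999999999
    )

# An "up" service whose customer data is out of sync still fails the check.
print(is_reliable(ServiceHealth(0.9995, 120, 450, 0.97, 1.0)))  # False
```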
The only thing constant in our industry is change and every role we play, creating flexibility and solutions that can be flexible that will grow as the applications grow is absolutely critical. Finally, scalability scalability refers to more than a system's ability to scale for additional load. It implies growth scalability and the revolution model carries the continuous innovation of a team and the byproducts of that growth within a system. For me, scalability is the most human of the considerations. It requires each of us in our various roles to consider everyone around us, our customers who use the system or rely on its services, our colleagues current and future with whom we collaborate and even our future selves. Mhm. Software development isn't a straight line, nor is it a perfect loop. It is an ever changing complex dance. There are twirls and pivots and difficult spins forward and backward. Engineers move in parallel, creating truly magnificent pieces of art. We need a modern model for this modern era and I believe this is just the revolution to get us started. Thank you so much for having me. >>Hey, we're back here. Live in the keynote studio. I'm john for your host here with lisa martin. David lot is getting ready for the fireside chat ending keynote with the practitioner. Hello! Fresh without data mesh lisa Emily is amazing. The funky artwork there. She's amazing with the talk. I was mesmerized. It was impressive. >>The revolution of devops and the creative element was a really nice surprise there. But I love what she's doing. She's challenging the status quo. If we've learned nothing in the last year and a half, We need to challenge the status quo. A model from the 1960s that is no longer linear. What she's doing is revolutionary. >>And we hear this all the time. All the cube interviews we do is that you're seeing the leaders, the SVP's of engineering or these departments where there's new new people coming in that are engineering or developers, they're playing multiple roles. It's almost a multidisciplinary aspect where you know, it's like going into in and out burger in the fryer later and then you're doing the grill, you're doing the cashier, people are changing roles or an architect, their test release all in one no longer departmental, slow siloed groups. >>She brought up a great point about persona is that we no longer fit into these buckets. That the changing roles. It's really the driver of how we should be looking at this. >>I think I'm really impressed, really bold idea, no brainer as far as I'm concerned, I think one of the things and then the comments were off the charts in a lot of young people come from discord servers. We had a good traction over there but they're all like learning. Then you have the experience, people saying this is definitely has happened and happening. The dominoes are falling and they're falling in the direction of modernization. That's the key trend speed. >>Absolutely with speed. But the way that Emily is presenting it is not in a brash bold, but it's in a way that makes great sense. The way that she creatively visually lined out what she was talking about Is amenable to the folks that have been doing this for since the 60s and the new folks now to really look at this from a different >>lens and I think she's a great setup on that lightning top of the 15 companies we got because you think about sis dig harness. I white sourced flamingo hacker one send out, I oh, okay. 
Thought spot rock set Sarah Ops ramp and Ops Monte cloud apps, sani all are doing modern stuff and we talked to them and they're all on this new wave, this monster wave coming. What's your observation when you talk to these companies? >>They are, it was great. I got to talk with eight of the 15 and the amount of acceleration of innovation that they've done in the last 18 months is phenomenal obviously with the power and the fuel and the brand reputation of aws but really what they're all facilitating cultural shift when we think of devoPS and the security folks. Um, there's a lot of work going on with ai to an automation to really kind of enabled to develop the develops folks to be in control of the process and not have to be security experts but ensuring that the security is baked in shifting >>left. We saw that the chat room was really active on the security side and one of the things I noticed was not just shift left but the other groups, the security groups and the theme of cultural, I won't say war but collision cultural shift that's happening between the groups is interesting because you have this new devops persona has been around Emily put it out for a while. But now it's going to the next level. There's new revolutions about a mindset, a systems mindset. It's a thinking and you start to see the new young companies coming out being funded by the gray locks of the world who are now like not going to be given the we lost the top three clouds one, everything. there's new business models and new technical architecture in the cloud and that's gonna be jerry Chen talk coming up next is going to be castles in the clouds because jerry chant always talked about moats, competitive advantage and how moats are key to success to guard the castle. And then we always joke, there's no more moz because the cloud has killed all the boats. But now the motor in the cloud, the castles are in the cloud, not on the ground. So very interesting thought provoking. But he's got data and if you look at the successful companies like the snowflakes of the world, you're starting to see these new formations of this new layer of innovation where companies are growing rapidly, 98 unicorns now in the cloud. Unbelievable, >>wow, that's a lot. One of the things you mentioned, there's competitive advantage and these startups are all fueled by that they know that there are other companies in the rear view mirror right behind them. If they're not able to work as quickly and as flexibly as a competitor, they have to have that speed that time to market that time to value. It was absolutely critical. And that's one of the things I think thematically that I saw along the eighth sort of that I talked to is that time to value is absolutely table stakes. >>Well, I'm looking forward to talking to jerry chan because we've talked on the queue before about this whole idea of What happens when winner takes most would mean the top 3, 4 cloud players. What happens? And we were talking about that and saying, if you have a model where an ecosystem can develop, what does that look like and back in 2013, 2014, 2015, no one really had an answer. Jerry was the only BC. He really nailed it with this castles in the cloud. He nailed the idea that this is going to happen. And so I think, you know, we'll look back at the tape or the videos from the cube, we'll find those cuts. 
But we were talking about this then we were pontificating and riffing on the fact that there's going to be new winners and they're gonna look different as Andy Jassy always says in the cube you have to be misunderstood if you're really going to make something happen. Most of the most successful companies are misunderstood. Not anymore. The cloud scales there. And that's what's exciting about all this. >>It is exciting that the scale is there, the appetite is there the appetite to challenge the status quo, which is right now in this economic and dynamic market that we're living in is there's nothing better. >>One of the things that's come up and and that's just real quick before we bring jerry in is automation has been insecurity, absolutely security's been in every conversation, but automation is now so hot in the sense of it's real and it's becoming part of all the design decisions. How can we automate can we automate faster where the keys to automation? Is that having the right data, What data is available? So I think the idea of automation and Ai are driving all the change and that's to me is what these new companies represent this modern error where AI is built into the outcome and the apps and all that infrastructure. So it's super exciting. Um, let's check in, we got jerry Chen line at least a great. We're gonna come back after jerry and then kick off the day. Let's bring in jerry Chen from Greylock is he here? Let's bring him in there. He is. >>Hey john good to see you. >>Hey, congratulations on an amazing talk and thesis on the castles on the cloud. Thanks for coming on. >>All right, Well thanks for reading it. Um, always were being put a piece of workout out either. Not sure what the responses, but it seemed to resonate with a bunch of developers, founders, investors and folks like yourself. So smart people seem to gravitate to us. So thank you very much. >>Well, one of the benefits of doing the Cube for 11 years, Jerry's we have videotape of many, many people talking about what the future will hold. You kind of are on this early, it wasn't called castles in the cloud, but you were all I was, we had many conversations were kind of connecting the dots in real time. But you've been on this for a while. It's great to see the work. I really think you nailed this. I think you're absolutely on point here. So let's get into it. What is castles in the cloud? New research to come out from Greylock that you spearheaded? It's collaborative effort, but you've got data behind it. Give a quick overview of what is castle the cloud, the new modes of competitive advantage for companies. >>Yeah, it's as a group project that our team put together but basically john the question is, how do you win in the cloud? Remember the conversation we had eight years ago when amazon re event was holy cow, Like can you compete with them? Like is it a winner? Take all? Winner take most And if it is winner take most, where are the white spaces for Some starts to to emerge and clearly the past eight years in the cloud this journey, we've seen big companies, data breaks, snowflakes, elastic Mongo data robot. And so um they spotted the question is, you know, why are the castles in the cloud? The big three cloud providers, Amazon google and Azure winning. You know, what advantage do they have? And then given their modes of scale network effects, how can you as a startup win? 
And so look, there are 500-plus services between all three cloud vendors, but there are also 500-plus startups competing against the cloud vendors, and there are almost 100 unicorns, private companies, competing successfully against the cloud vendors, including public companies. So like Elastic, Mongo, Snowflake. And Databricks, not public yet, HashiCorp, not public yet. These are some examples of the names that I think are winning, and watch this space, because you'll see more of these guys storm the castle, if you will. >> Yeah. And you know, it's a funny metaphor because it has many different implications. One, as we talk about security, the perimeter, the gates, the moats being on land. But now you're in the cloud, you also have a different security paradigm. You have different, new kinds of services that are coming on board faster than ever before, not just from the cloud players but from companies contributing into the ecosystem. So the combination of the big three making the market, the main markets, I think you call it 31 markets that we know of, and probably maybe more. And then you have this notion of a sub-market, which means there's, like, we used to call it white space back in the day, remember, where's the white space? I mean, if you're in the cloud, there's like a zillion white spaces. So talk about this sub-market dynamic between the markets that are being enabled by the cloud players and how these sub-markets play into it. >> Sure. So first, the first problem was what we did: we downloaded all the services for the big three clouds, right? And you know, what AWS calls a database or database service, like DocumentDB on Amazon, is like Cosmos DB on Azure. So first things first, we had to look at all three cloud providers and re-categorize all the services, almost 500, apples to apples to apples. That's number one. Number two is you look at all these markets or sub-markets and say, okay, how can we cluster these services into things that you know you and I can grok, right? Because the way Amazon, Azure and Google think about it is very different, and the beauty of the cloud is this kind of fat long tail of services for developers. So instead of, like, Oracle being a single database for all your needs, there are like 20 or 30 different databases, from time series to analytics databases, we're talking to Rockset later today, right, to document databases like Mongo, to search databases like Elastic. And so what happens is there's not one giant market like databases; there's a database market and 30 or 40 sub-markets that serve the needs of developers. So the great news is cloud has reduced the cost and created something new for developers. Also, the good news is, for a startup, you can find plenty of white space solving a pain point very specific to a different type of problem. >> And you can map a power law to this. I love the power law metaphor, you know: it used to be a very thin neck, not much torso, and then a long tail.
But now as you're pointing out this expansion of the fat tail of services, but also there's big tam's and markets available at the top of the power law where you see coming like snowflake essentially take on the data warehousing market by basically sitting on amazon re factoring with new services and then getting a flywheel completely changing the economic unit economics completely changing the consumption model completely changing the value proposition >>literally you >>get Snowflake has created like a storm, create a hole, that mode or that castle wall against red shift. Then companies like rock set do your real time analytics is Russian right behind snowflakes saying, hey snowflake is great for data warehouse but it's not fast enough for real time analytics. Let me give you something new to your, to your parallel argument. Even the big optic snowflake have created kind of a wake behind them that created even more white space for Gaza rock set. So that's exciting for guys like me and >>you. And then also as we were talking about our last episode two or quarter two of our showcase. Um, from a VC came on, it's like the old shelf where you didn't know if a company's successful until they had to return the inventory now with cloud you if you're not successful, you know it right away. It's like there's no debate. Like, I mean you're either winning or not. This is like that's so instrumented so a company can have a good better mousetrap and win and fill the white space and then move up. >>It goes both ways. The cloud vendor, the big three amazon google and Azure for sure. They instrument their own class. They know john which ecosystem partners doing well in which ecosystems doing poorly and they hear from the customers exactly what they want. So it goes both ways they can weaponize that. And just as well as you started to weaponize that info >>and that's the big argument of do that snowflake still pays the amazon bills. They're still there. So again, repatriation comes back, That's a big conversation that's come up. What's your quick take on that? Because if you're gonna have a castle in the cloud, then you're gonna bring it back to land. I mean, what's that dynamic? Where do you see that compete? Because on one hand is innovation. The other ones maybe cost efficiency. Is that a growth indicator slow down? What's your view on the movement from and to the cloud? >>I think there's probably three forces you're finding here. One is the cost advantage in the scale advantage of cloud so that I think has been going for the past eight years, there's a repatriation movement for a certain subset of customers, I think for cost purposes makes sense. I think that's a tiny handful that believe they can actually run things better than a cloud. The third thing we're seeing around repatriation is not necessary against cloud, but you're gonna see more decentralized clouds and things pushed to the edge. Right? So you look at companies like Cloudflare Fastly or a company that we're investing in Cato networks. All ideas focus on secure access at the edge. And so I think that's not the repatriation of my own data center, which is kind of a disaggregated of cloud from one giant monolithic cloud, like AWS east or like a google region in europe to multiple smaller clouds for governance purposes, security purposes or legacy purposes. >>So I'm looking at my notes here, looking down on the screen here for this to read this because it's uh to cut and paste from your thesis on the cloud. The excellent cloud. 
The of the $38 billion invested this quarter. Um Ai and ml number one, um analytics. Number two, security number three. Actually, security number one. But you can see the bubbles here. So all those are data problems I need to ask you. I see data is hot data as intellectual property. How do you look at that? Because we've been reporting on this and we just started the cube conversation around workflows as intellectual property. If you have scale and your motives in the cloud. You could argue that data and the workflows around those data streams is intellectual property. It's a protocol >>I believe both are. And they just kind of go hand in hand like peanut butter and jelly. Right? So data for sure. I. P. So if you know people talk about days in the oil, the new resource. That's largely true because of powers a bunch. But the workflow to your point john is sticky because every company is a unique snowflake right? Like the process used to run the cube and your business different how we run our business. So if you can build a workflow that leverages the data, that's super sticky. So in terms of switching costs, if my work is very bespoke to your business, then I think that's competitive advantage. >>Well certainly your workflow is a lot different than the cube. You guys just a lot of billions of dollars in capital. We're talking to all the people out here jerry. Great to have you on final thought on your thesis. Where does it go from here? What's been the reaction? Uh No, you put it out there. Great love the restart. Think you're on point on this one. Where did we go from here? >>We have to follow pieces um in the near term one around, you know, deep diver on open source. So look out for that pretty soon and how that's been a powerful strategy a second. Is this kind of just aggregation of the cloud be a Blockchain and you know, decentralized apps, be edge applications. So that's in the near term two more pieces of, of deep dive we're doing. And then the goal here is to update this on a quarterly and annual basis. So we're getting submissions from founders that wanted to say, hey, you missed us or he screwed up here. We got the big cloud vendors saying, Hey jerry, we just lost his new things. So our goal here is to update this every single year and then probably do look back saying, okay, uh, where were we wrong? We're right. And then let's say the castle clouds 2022. We'll see the difference were the more unicorns were there more services were the IPO's happening. So look for some short term work from us on analytics, like around open source and clouds. And then next year we hope that all of this forward saying, Hey, you have two year, what's happening? What's changing? >>Great stuff and, and congratulations on the southern news. You guys put another half a billion dollars into early, early stage, which is your roots. Are you still doing a lot of great investments in a lot of unicorns. Congratulations that. Great luck on the team. Thanks for coming on and congratulations you nailed this one. I think I'm gonna look back and say that this is a pretty seminal piece of work here. Thanks for sharing. >>Thanks john thanks for having us. >>Okay. Okay. This is the cube here and 81 startup showcase. We're about to get going in on all the hot companies closing out the kino lisa uh, see jerry Chen cube alumni. He was right from day one. We've been riffing on this, but he nails it here. I think Greylock is lucky to have him as a general partner. 
He's done great deals, but I think he's hitting the next wave big. This is, this is huge. >>I was listening to you guys talking thinking if if you had a crystal ball back in 2013, some of the things Jerry saying now his narrative now, what did he have a crystal >>ball? He did. I mean he could be a cuBA host and I could be a venture capital. We were both right. I think so. We could have been, you know, doing that together now and all serious now. He was right. I mean, we talked off camera about who's the next amazon who's going to challenge amazon and Andy Jassy was quoted many times in the queue by saying, you know, he was surprised that it took so long for people to figure out what they were doing. Okay, jerry was that VM where he had visibility into the cloud. He saw amazon right away like we did like this is a winning formula and so he was really out front on this one. >>Well in the investments that they're making in these unicorns is exciting. They have this, this lens that they're able to see the opportunities there almost before anybody else can. And finding more white space where we didn't even know there was any. >>Yeah. And what's interesting about the report I'm gonna dig into and I want to get to him while he's on camera because it's a great report, but He says it's like 500 services I think Amazon has 5000. So how you define services as an interesting thing and a lot of amazon services that they have as your doesn't have and vice versa, they do call that out. So I find the report interesting. It's gonna be a feature game in the future between clouds the big three. They're gonna say we do this, you're starting to see the formation, Google's much more developer oriented. Amazon is much more stronger in the governance area with data obviously as he pointed out, they have such experience Microsoft, not so much their developer cloud and more office, not so much on the government's side. So that that's an indicator of my, my opinion of kind of where they rank. So including the number one is still amazon web services as your long second place, way behind google, right behind Azure. So we'll see how the horses come in, >>right. And it's also kind of speaks to the hybrid world in which we're living the hybrid multi cloud world in which many companies are living as companies to not just survive in the last year and a half, but to thrive and really have to become data companies and leverage that data as a competitive advantage to be able to unlock the value of it. And a lot of these startups that we talked to in the showcase are talking about how they're helping organizations unlock that data value. As jerry said, it is the new oil, it's the new gold. Not unless you can unlock that value faster than your competition. >>Yeah, well, I'm just super excited. We got a great day ahead of us with with all the cots startups. And then at the end day, Volonte is gonna interview, hello, fresh practitioners, We're gonna close it out every episode now, we're going to do with the closing practitioner. We try to get jpmorgan chase data measures. The hottest area right now in the enterprise data is new competitive advantage. We know that data workflows are now intellectual property. You're starting to see data really factoring into these applications now as a key aspect of the competitive advantage and the value creation. So companies that are smart are investing heavily in that and the ones that are kind of slow on the uptake are lagging the market and just trying to figure it out. 
So you start to see that transition and you're starting to see people fall away now from the fact that they're not gonna make it right, You're starting to, you know, you can look at look at any happens saying how much ai is really in there. Real ai what's their data strategy and you almost squint through that and go, okay, that's gonna be losing application. >>Well the winners are making it a board level conversation >>And security isn't built in. Great to have you on this morning kicking it off. Thanks John Okay, we're going to go into the next set of the program at 10:00 we're going to move into the breakouts. Check out the companies is three tracks in there. We have an awesome track on devops pure devops. We've got the data and analytics and we got the cloud management and just to run down real quick check out the sis dig harness. Io system is doing great, securing devops harness. IO modern software delivery platform, White Source. They're preventing and remediating the rest of the internet for them for the company's that's a really interesting and lumbago, effortless acres land and monitoring functions, server list super hot. And of course hacker one is always great doing a lot of great missions and and bounties you see those success continue to send i O there in Palo alto changing the game on data engineering and data pipe lining. Okay. Data driven another new platform, horizontally scalable and of course thought spot ai driven kind of a search paradigm and of course rock set jerry Chen's companies here and press are all doing great in the analytics and then the cloud management cost side 80 operations day to operate. Ops ramps and ops multi cloud are all there and sunny, all all going to present. So check them out. This is the Cubes Adria's startup showcase episode three.

Published Date: Sep 23, 2021



Nick Durkin, Harness io | AWS Startup Showcase


 

>> Welcome to The Cube Startup Showcase made possible by AWS. In this session, we're going to dig into how organizations can improve governance and use AI to increase confidence and trust in their software delivery process. My name is Dave Vellante and joining me is Nick Durkin, who's the field CTO of Harness IO. Nick, thanks for joining. >> Thank you so much for having having me on here. I appreciate it. >> Give us the overview of the company, let's, let's start with what you guys are all about. >> Yeah, I think when you look at Harness specifically, it started as continuous delivery as a service. And we really have grown from that and become a true modern software delivery platform. And with everything that we deliver, we do this with artificial intelligence and machine learning in mind, to remove all of the tasks that we hate. No one wants to babysit deployments. No one wants to sit there and watch tests run that we don't need to run. And so really taking an artificial intelligence approach to software delivery. >> Great. Let's talk about software delivery and maybe we can dig in to some of the trends that you're seeing, maybe the drivers that are leading people to new approaches, you know, some of the challenges that customers face which are also opportunities. >> Absolutely. You know, it's interesting. We look at everyone on their journey for software delivery and traditionally velocity is actually what brings people into to either deploying faster or we need to get and modernize our platforms quicker. And so velocity is the driver traditionally to bringing new tools and new technology. And what's interesting is that governance while equally, if not more important, that's often second fiddle. And so what we find is that customers go on a journey where they use their CI tool and they expand it. They use their open source offerings that they have from modern technologies to create velocity and go fast. But then what they quickly find out is they have to govern this. And whether that's for regulatory purposes or whether that's for just internal processes, right? This becomes the hard part. And a lot of people have to script it. And if someone's actually able to achieve say velocity and governance, this is where now we're reaching, you know, speeds now that we actually wanted. So we're now deploying end times end faster, our monoliths are turning into microservices and we're actually deploying infinitely quicker. And now this becomes a problem because we don't know what broke what. And so if you can achieve velocity in governance, the next problem that people have is traditionally quality. >> Yeah, so you know, that lack of governance, that's a real challenge because you're now seeing even more stress as data becomes prone to those same processes, DevOps for data, if you will. So now you got the whole privacy and governance slamming together, and people want to automate it as fast as they possibly can. So this puts even more stress on developers. And I think your point is they've got to go faster, but that's antithetical to quality. So therein lies the conundrum, but the answer is automation, machine intelligence. Maybe you could double click on that. >> Yeah, sure. No, that's exactly it. If you think about it, there's not enough people at these companies to sit there and look at the knock and understand what normal looks like. There's not enough people to look at every line of log to understand what's going on and what broke. 
And this is where you can start leveraging artificial intelligence to understand what does normal look like. And when you think about it, they are traditionally opposing forces, velocity and governance. But the reality is when we talk about software delivery, oftentimes people will say and bring in tools, people, processes, and tools are people process and technology. And the reality is it's all entirely about confidence in your people. And whether it's a tool or whether it's a process that provides that confidence, if that's what they're looking for is confidence that their developers can deploy when they want to as needed. And if something goes wrong, it will be taken care of. And so back to your point, Dave, specifically, when we think about software delivery, we think about continuous delivery we really mean automate everything. Right? From start to finish. And that means with all of the guard rails and all the rules that you need for governance, so that you can meet those security requirements, you meet those regulatory requirements while still empowering developers. >> You know one of the other things that obviously has changed in the last 10 years is cloud and cloud adoption, and cloud costs, everybody looks at their bill at the end of the month. They go, okay, I love it because of driving new business models, but hey, can we figure out how to control these costs a little bit? What is the role of developers in terms of controlling cloud costs? How can they impact that? >> Sure. No. If you think about this whole shift left paradigm, and we're now empowering developers to do more and more, what we're not giving them is the inputs that they need to effectively do their job. If you want engineers to care about costs, it's something they need visibility to. If you want, if you want the administrators to function out of, you know, a cost mindset, it needs to be something that's part of their daily information that they have. And today that's not how it works. Today a CFO will call down and say, hey, we're spending way too much money. You know, I just got one. We spent over $35,000 on some test clusters and I got a phone call from our CFO. Just like everyone else does. And then we had to go fix it. Instead of giving people who honestly would do no wrong if they had the information in front of them, giving them that information. So if you solve velocity with governance and now even solve for quality, the next thing that you have coming is cost. You're now going to be deploying infinitely faster to the cloud with so many changes that you can't keep, can't keep track of it. And you need that same auditability that you'd have with a governance platform to show you what you're changing in cost. So now what you want to do is empower the same engineers to know what changed they made, what it modified, how it affected it, but also how it affected costs. And if you give that to the engineers and the people that can affect change, it's amazing what happens. >> I want to come back to this notion of data challenges, because applications are increasingly more data centric. You put your data in the cloud, great, but then people realize, oh, the clouds expanding is going out to the edge. And so data by its very nature is, is distributed. People want more control of the data, the lines of business, the domain experts that it's self service that creates a new problem around governance. 
And when I talk to practitioners, what I'm hearing is that as they embark on this journey, because everything used to be shoved in one place, a big monolith, and that's a limiter to scale, they'll phase it. They'll say, okay, phase zero, we're process builders. We've got to figure out how governance is going to work. And then as fast as they possibly can, they'll codify that so they can automate it. Do you see that evolution in governance? How is it playing out in your world? >> Absolutely. You made mention of data, and data really has gravity. But to your point, what we find is that people want choice. What drives where they place their data and where they place their applications comes down to choice and a lot of different things. And one of the things we've found is that if you can't define those processes, policies, and procedures to meet your governance requirements in any of the clouds, it becomes a burden on your employees. If they only have it for one specific location, whether on premise or in the cloud, and now you have to move to another cloud or another place, it's all that much more rework. The reality is the tooling you have should allow your engineers to deploy wherever it's needed, whether that's on Amazon or Azure, or, since we mostly think in terms of the different Amazon pieces, whether I want to deploy to Amazon EKS or EKS Anywhere, physically in the data center or up in the cloud. That shouldn't be something our engineers have to care about. Whether I'm putting it on ECS or on EC2 instances, those are things our engineers shouldn't have to care about, and the governance should allow you to deploy to the appropriate locations when required. So ultimately, if you craft this accordingly, it should be designed so that at any point in time your engineers can't make that mistake. They can't put data in the wrong place, they can't put applications in the wrong place, because the governance will hold them to it. And you'll know why, so that you can fix it. If you create that type of behavior, then there are no mistakes. Allow people the freedom to deploy, and as long as they do it within the rules, it'll work. >> I wonder if we could bring up that previous slide again and talk about velocity, governance, quality, and efficiency. Which is most important when you talk to customers? >> Yeah, absolutely. It depends on where they're at in the journey. Velocity might be the thing that's hurting them right now, and we have to solve for it; in which case, let's go grab a whole bunch of open source tools, grab all the things we have, and start scripting. What we find is that customers often come to us when they realize, I've got all this, but now I need to make sure it's governed. And this is where it's hard. That's where people will, if you will, phone a friend and look for some help, because this is complex and it's not something you want to do on your own, especially if you've been doing it for the last nine years and you don't want to do it again for a new technology or a new space.
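To make the guardrail idea Durkin describes concrete, here is a minimal sketch of a pre-deployment policy check of the kind a delivery platform could run. It is a hypothetical illustration, not Harness's actual policy engine; the rule table, target names, and data classes are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    service: str
    target: str        # e.g. "eks-us-east-1", "eks-anywhere-dc1", "ec2-eu-west-1" (invented names)
    data_class: str    # e.g. "pii", "public"

# Hypothetical governance rules: which data classes may land on which targets.
ALLOWED_TARGETS = {
    "pii": {"eks-anywhere-dc1", "eks-us-east-1"},                      # PII stays in approved locations
    "public": {"eks-us-east-1", "ec2-eu-west-1", "eks-anywhere-dc1"},  # public data can go anywhere listed
}

def check_guardrails(dep: Deployment) -> list[str]:
    """Return a list of violations; an empty list means the deploy may proceed."""
    violations = []
    allowed = ALLOWED_TARGETS.get(dep.data_class, set())
    if dep.target not in allowed:
        violations.append(
            f"{dep.service}: data class '{dep.data_class}' may not be deployed to '{dep.target}'"
        )
    return violations

if __name__ == "__main__":
    dep = Deployment(service="billing", target="ec2-eu-west-1", data_class="pii")
    problems = check_guardrails(dep)
    if problems:
        # The pipeline would stop here and tell the engineer exactly why.
        for p in problems:
            print("BLOCKED:", p)
    else:
        print("Deploy allowed")
```

The point of the sketch is the behavior Durkin describes: engineers keep the freedom to deploy, and the rules, not a central reviewer, are what block a disallowed combination and explain why.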
And then, if you've actually achieved governance, which a lot of regulatory-driven customers have, quality becomes the part where they need help. So really it depends on where the customer is in their journey, but I can guarantee you, everyone is looking at one of these four pieces as their bottleneck right now. It's about being able to provide the resolution to any of those bottlenecks out of the gate. You want to make sure that if you have that coming, you're prepared for it, and you have a tool that can help you as you progress through those phases. >> If I understand it, your strategy is to help customers optimize wherever they are in that journey. They might be in a cloud migration: hey, we've got to go fast, let's go. Then their attention shifts to governance. And then eventually, as they get more mature, it's, okay, we've got this down, now we're going to lower our cost and be more efficient. So let's talk about how you do this. Here's a graphic that really speaks to your platform. Nick, why don't you walk us through it? >> Yeah, sure. When we think about software delivery, we traditionally think about CI and CD, so that's where we can start, but there's a lot more to software delivery, and that's why we offer the different pieces. One of the benefits of the Harness Platform is that you're not locked into every single part and piece. If you already have different technologies you want to use, by all means use them, but we do offer you those technologies if they can help, and every one of them is designed around that idea and understanding of AI and machine learning at its core. So if you think about it, we started life as continuous delivery as a service, taking what we consider artifact to customer. That's really what we mean by continuous delivery. And in there, we want to think about all the things that happen after delivery the same way your best engineers would. We're going to look at your performance metrics and your business metrics and think about them the same way your best engineers would, but we're also going to look at those logs and understand them exactly the same way: yep, that's fine, that's fine... hey, what's that over there? This is where we use AI and ML not to do what people love doing, but to do what people hate doing, which is babysitting deployments. When you think about CI, we traditionally think about code to artifact. That's the first part, and there, Harness acquired Drone, the most loved open source CI tool on the planet. And I can't make that up, because you can go to GitHub and look at how people comment on it. We decided to invest in it, quadruple the team, and then add all of those security, governance, and quality pieces to it, and then go one step further and add some more of that artificial intelligence. Dave, I'll ask you a direct question. If you were to change the gas cap on your car, would you, after you change it, go check every single electrical device and electrical switch on your car to make sure it works? >> I hope not. (laughter) >> You hope not, right?
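The gas-cap question being set up here, and explained in the next answer, comes down to change-based test selection: only run the tests a change can actually affect. A rough sketch of the idea, assuming a precomputed map from tests to the files they cover; the map, file names, and test names are invented, and real test-intelligence systems derive this from coverage data and call graphs rather than a hand-written table.

```python
# Hypothetical coverage map: which source files each test exercises.
TEST_COVERAGE = {
    "test_checkout_flow": {"src/checkout.py", "src/cart.py"},
    "test_login": {"src/auth.py"},
    "test_fuel_cap": {"src/fuel_cap.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    """Pick only the tests whose covered files intersect the change set."""
    return sorted(
        test for test, covered in TEST_COVERAGE.items()
        if covered & changed_files
    )

if __name__ == "__main__":
    # A change to the "gas cap" only triggers the gas-cap test,
    # not the whole electrical system.
    print(select_tests({"src/fuel_cap.py"}))   # -> ['test_fuel_cap']
    print(select_tests({"src/cart.py"}))       # -> ['test_checkout_flow']
```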
But the funny and interesting thing is that when we do tests today in our CI tools, if you make one change, our customers, actually every customer, test everything every single time, instead of being more intelligent about it and only testing the things that matter. So again, it's about bringing those costs down, bringing that effort down, and bringing that toil down across the board. The same holds true for feature flags. If you want to get more granular, say with complex deployments, you want to do that with feature flagging, so you can turn different features on and off for different customers, for different regions or different reasons. Now this is built into the same tooling, where you can apply it to a pipeline and then have that verification after. So you really get the opportunity and the ability to use AI to do what people are doing manually today, calling customers to ask whether things look okay, or sitting and watching output on a screen; now you can have machine learning handle it for you. And it's designed so that any type of change you want to make, whether in databases or in different network topology, uses that same machine learning and verification. And then the last piece is cost. A lot of people will say that a cloud cost tool does not belong in software delivery. But if you believe in shift left and you believe in giving people all the inputs, I think you'd argue with yourself on that and say it probably does live here. And that's what we want: to bring that data, and not only visibility but recommendations on how to fix it, and actionability, so you can start taking action right away to bring costs down. >> Yeah, but see, you're not just making software delivery better. You're rethinking the approach. You're not just paving the cow path, as I sometimes say. You're reinventing, to use an AWS term. >> Well, we actually did that specifically. When we said we wanted to build continuous delivery, we wanted to do it in a way and a shape that wasn't copying how other CI tools had done it by expanding CI. We said you shouldn't have to know how to write complex deployments. You shouldn't have to care whether you're on an EKS cluster that's on premise or up in the cloud; your engineers shouldn't have to care, and we should abstract that from them. That's what we did there. So to your point, with all of these pieces, yes, we're rethinking them. We're not just paving that same path, like you said; we're truly trying to make it usable and viable for those that can use it. >> What do people buy from you? Is this a subscription? Is it a consumption-based model? How does that all work? >> Yeah, great question. It's a subscription, and ultimately we're a software delivery company, a continuous delivery company. So unlike other people who will talk to you about updates and new versions, we deploy a new version of our software at least once a day; we practice what we preach. And if you're going to continue to deliver software with somebody who doesn't do it themselves, you should probably ask yourself: if they can't trust themselves to do it, are you going to? But the reality is, depending on what you need, you only have to pay for what you need.
So it's not like other platforms where you pay for everything and only use a part and piece of it. For every aspect that you want or need, you're more than welcome to use it. And I'll say something my salespeople probably don't like, but we've never lost a deal on cost. We're here to show you value and ultimately make sure it can help you and your customers, and that's what we do. >> Well, this is clearly the trend in software pricing. It's true cloud pricing, it's consumption pricing, and you seem to have gotten it right in a hot area; that's why the investors are getting behind you. Nick Durkin of Harness IO, excellent. Thanks so much for your time, thanks for your insights. Really appreciate it. >> I appreciate it. Thank you so much, Dave. Thank you for having me on. >> You're welcome. Okay, you're watching The Cube's Startup Showcase made possible by AWS: new breakthroughs in DevOps, data analytics, and cloud management tools. Keep it right there. (soft music)
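As an editorial aside on the verification idea that runs through this interview, learning what "normal" looks like and checking a new deployment against it, here is a minimal sketch. The baseline numbers and metric names are invented, and a real system would use ML models over logs and time series rather than a fixed tolerance.

```python
# Hypothetical learned baseline for a service (would come from historical data).
BASELINE = {"error_rate": 0.02, "p95_latency_ms": 180.0}

# Metrics observed from the new deployment (would come from monitoring/log systems).
OBSERVED = {"error_rate": 0.05, "p95_latency_ms": 190.0}

def verify(observed: dict, baseline: dict, tolerance: float = 1.5) -> list[str]:
    """Flag any metric that regressed beyond `tolerance` times its baseline."""
    return [
        f"{name}: {observed[name]} vs baseline {limit} (allowed {tolerance * limit:.2f})"
        for name, limit in baseline.items()
        if observed.get(name, 0.0) > tolerance * limit
    ]

if __name__ == "__main__":
    regressions = verify(OBSERVED, BASELINE)
    if regressions:
        print("verification failed; a platform would halt or roll back:")
        for r in regressions:
            print(" -", r)
    else:
        print("deployment verified")
```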

Published Date : Sep 15 2021



Jyoti Bansal, Harness | CUBE Conversation


 

>> Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of the cube. We've got a great conversation with the CEO and co-founder of Harness, a hot startup: Jyoti Bansal, who is the co-founder and CEO, and also the co-founder of Unusual Ventures, a really awesome venture capital firm doing some great investment work, with great content over there for entrepreneurs and for people in the community. And of course he's also the founder of Big Labs, his playground for building out new applications, and is well known for being the founder of AppDynamics, a super successful billion-dollar exit as a startup, sold to Cisco. Now he's doing a lot of things and driving Harness, solving big problems. So Jyoti, a mouthful of an intro there; you've done a lot. Congratulations on an amazing entrepreneurial career, and now your next opportunity, Harness, among other things. So congratulations, and thank you for coming. >> Thank you, John, and glad to be here. >> You guys are solving a big problem in software delivery. Obviously software is changing the world. You're seeing open source projects increasing by orders of magnitude, enterprises jumping on open source, general adoption at large scale with cloud. Software is being delivered faster than ever before, and with cloud scale and now edge, there are huge challenges around how software is deployed, managed, and maintained. We're even talking about space: how do you do break-fix in space? All these things are happening at massive scale across the world. You are solving a big problem. So take a minute to explain what Harness is doing, why you guys exist, and why you're jumping into this venture. >> Sure. Yeah. Harness's mission is to simplify software delivery and make it top notch for everyone. If you look at the likes of Google and Facebook and Netflix and Amazon, these companies have mastered the process of software delivery: their engineers write code, the code is shipped to the end users, and they can do it multiple times a day, at their scale and at the complexity they have. Most other businesses in the world all want to be software companies, but it's extremely hard for them to get there. I saw this firsthand when I was at AppDynamics as CEO. There were about 1,200 to 1,300 employees in the company, and we had about 350 or so engineers. For every 10 or 12 engineers, we had one person whose job was to write automation and scripting and tooling for trying to ship software, all kinds of scripting: we'd write scripts in Chef and Puppet and Ansible to deploy in AWS and whatnot. One day we did the math: we had overall about 30 people whose job was DevOps engineering, writing automation to deploy somewhere, and one engineer is about 200K in loaded cost, so that's six million a year spent just writing deployment scripting. And even with that, we were nowhere close to world class, where world class means you could ship every day, ship on demand, deploy software whenever you want.
And I looked at that as a problem beyond AppDynamics as well. With all the customers I would talk to, large banks, insurance companies, retailers, and telcos, I would hear the same challenge: we hear about DevOps, we go to all these DevOps conferences and events, and we see the same 10 companies presenting how they home-grew some kind of DevOps system for software delivery. We cannot survive with this; as a world, we need the right kind of platforms for software delivery, and to simplify it so that everyone can become as good as a Google, Netflix, or Amazon. That's the mission at Harness: can we take every business in the world and, in a few weeks or a few months, get them as sophisticated and good in terms of their tooling for software delivery as a Google, Facebook, or Amazon would be? That's what we're doing. >> It's a great ambition, and by the way, it's a bold move and it's needed. I'll tell you, it's interesting. You mentioned shipping code at that speed at Facebook and Google. They were forced to do that, and they have all the benefits the mainstream enterprise doesn't. But even go back 20 years, 15 years ago, that's when AWS was born; EC2 and S3 are celebrating their 15th birthday. Hyperscale has had some good moves there. But the average business went from craft, waterfall, a QA department, a little bit slower, I won't say slow motion but manageable, to now the speed of shipping and the speed of scale, and that's a huge issue. What kind of pressure do you see that putting on the developer, the individual, not just the system? Because you've got the system of development and the developers themselves. >> I think the developers have adapted quite well to this. If you look at the software development part of it, agile development has been happening for quite some time, so developers have learned how to ship things fast, in one-week or two-week sprints or even faster cycles. They moved off the waterfall models many years ago. So that's the software development side of things. Then you have the infrastructure side of things: can you provision infrastructure fast, can you get hardware fast? The cloud has done that well. Where the challenge is, is the process. Developers are writing code fast enough these days, and the infrastructure itself can be provisioned and maintained and changed fast enough, but how do you bring it all together? There is an entire process around it that's not moving fast enough. So that's where the bottleneck is. And the developer experience becomes really bad, because developers are waiting on the process: they write some code, and the code sits on the shelf while they wait.
>> And they get all pissed off and mad: what's the holdup, what's the process? And then security shifting left: wait a minute, go back and rewrite code. This is huge. I want to nail it down quickly, if you don't mind honing in on the value proposition. What is the Harness value proposition? What is the pitch, what are you offering, what are you solving? Can you nail that in real quick? >> Sure. What Harness is solving is simplifying that software delivery pipeline. A developer writes code, and that code goes through a bunch of steps: you build the code, then you test it, then you run integration tests, then you go through security checks, then compliance checks, then more testing, then you deploy to a staging environment and do a bunch of things on it. Then you start deploying to the production environment, but in production you deploy to a small part of it first, verify everything is working well, roll it back if it's not, and if it is working well, deploy to more of it. This entire process can take weeks, and it's mostly automated with random scripts here and there. So we simplify the entire process: you describe your process in a very declarative way, this is the process I want to achieve, and Harness automatically creates your pipelines for it. Most of these pipelines make heavy use of intelligence, AI and ML, to go from one step to another. For example, when you say, deploy the code to 1% of my production environment and check that everything is working well, and if it is, go to the next 10%: how do you figure out whether everything is working well? That's where the AI and ML come in. We learn the normal behavior of your application, how a normal version of the code behaves, its performance behavior, its functional behavior, what errors it throws. If everything is good, you go to the next step. Harness automatically manages that entire cycle, and it's automated, so you get governance, a high degree of automation, a high degree of security, and a high degree of quality around it. Think of it as the CI/CD that a lot of developers know, but this process is CI/CD on steroids, available to you, right?
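The progressive rollout Bansal describes, deploy to 1% of production, verify against learned normal behavior, then widen, can be sketched roughly as below. The fixed error-rate threshold stands in for the ML-based verification he mentions, and deploy_to_percentage, current_error_rate, and rollback are hypothetical placeholders rather than any real Harness API.

```python
import time

BASELINE_ERROR_RATE = 0.02       # learned "normal" error rate (assumed)
CANARY_STEPS = [1, 10, 50, 100]  # percent of production traffic

def deploy_to_percentage(version: str, percent: int) -> None:
    print(f"deploying {version} to {percent}% of production")  # placeholder

def current_error_rate(version: str) -> float:
    return 0.01  # placeholder: would query monitoring/log systems

def rollback(version: str) -> None:
    print(f"rolling back {version}")  # placeholder

def progressive_rollout(version: str) -> bool:
    for percent in CANARY_STEPS:
        deploy_to_percentage(version, percent)
        time.sleep(1)  # in practice: a soak period while metrics accumulate
        if current_error_rate(version) > 2 * BASELINE_ERROR_RATE:
            rollback(version)
            return False  # stop the pipeline; nothing past this step runs
    return True

if __name__ == "__main__":
    ok = progressive_rollout("v1.4.2")
    print("promoted" if ok else "rolled back")
```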
Of course, that doesn't factor in the what happens when you roll it out, people start complaining, playing with it, breaking it, then you gotta go back and do it again. I mean, that's real and that's a real problem, I mean, can you just going to give a taste of the scar tissue that goes on there. What's some of the what are some of the what some of the pain points that you solve? >>Yeah. So, I think the that is that really becomes the core of the pain point, like, you know, people need, like high amount of dependability, easy to change things, you know, it's we call it like the lack of intelligent automation, you know, and the and this heavy amount of developer toil that the developers have to do so much work around around making all of this work like you know it has to be simplified. So that's that's where our value product comes in like you know, it's it's you know uh you can get like a visual builder and like minutes you can build out the entire process which is your job stability at city pipeline or you could also do like a declarative Yamil interface and just like you know in a few lines just right up whatever process you would want and we would review should be shipped with all kind of integrations with every cloud environment, every monitoring system, every system, every kind of testing process, every kind of security scanning so you can just drag and drop and in minutes eur, europe and running, it just creates so much velocity in this entire process. And also this manageability that people have struggled with >>morale to I mean you can imagine the morale developers go up significantly when you start seeing that the developer productivity has always been a big thing but this intelligent automation conversations huge. Some people have it, some people don't, people say they have it, what is how can you, how can the company figure out uh if someone's really got the real deal when it comes to intelligent automation because again, automation is the is key into devops. >>Yeah, I think I I almost started like you know like if you look at the generational evolution of things like the the first generation was uh you know developer writes code and then it will give you will give it to some some mighty at men who will go and deploy the code, run some commands and do things like tradition to was writing scripts that you're right, a lot of scripts that was automation but it was kind of dumb our dimension and that's how we have, you know that that's where the industry is so actually break now even most of it, the third generation is when the automation is you don't write scripts to you know uh to automate things, you tell our system what you want to achieve and it generates automation for you, right? And that's what we call intelligent automation. Where it's all declarative and all the you don't have to maintain a lot of you know scripts etcetera because they are, you know, they can't keep up with it. You know, you have to change the process all the time and if you change the process, it doesn't work, it becomes completely, you know, uh you know, it becomes very fragile to manage it. So that's that's really where intelligent automation comes in, you know, I look at like, you know, if you can have like uh like you look at like a wrestler, you know, making cars the entire assembly line is automated, but it's, but it's if you want to change something in the assembly line, even that process is automated and it's very simple. Right? 
So it's and that's what gives them so much uh you know, uh you know, uh let's say control and manageability around the manufacturing process. So the software delivery, uh you know, by assembly line, which is the software software by ci cd piper and really should be a more sophisticated and more intelligent as well now. And that's that's an exhibition, >>jodi. You're also pointing out something that we cover a lot on the cube and we've been writing about is how modern software practices are changing, where this team makeup or whatever its speed is key, but also getting data. Everyone who's successful with cloud and cloud scale and now you got the edge opening up and like I said, even space is going to be programmable, Everything's programmable. And the key is to get the data from the use cases right, get something deployed, look at it, get some data and then double down and make it better. That's a modern approach, not build it and then rebuild it and tear it down and rebuild it, which you're kind of leaning into this idea of let's get some delivery going, let's structure it and then feed it more so that the developers can iterate with with, with the pipeline and this is this again, can scale, can you talk about that? Can you comment on your reaction to that? >>Yeah, definitely. That's exactly how we look at it. Like, you know, you uh you want developers to kind of like say they want to do a, you know, automated process to deploy in their communities infrastructure in matter of minutes, you should be able to get started, but now it's like, you know, there's so much data that comes into it. Like, you know that you have monitoring systems systems like ab dynamics and you're like and data dog and you're logging systems your Splunk and elastic and you know, some logic, you have your, you know, different kind of testing systems here, your security scanning, so there's so much data in it. They're like, you know, terabytes and terabytes of data from it. So when you start doing your deployments, we could also come seem all of the data and see like what was the impact of those deployments or court changes in each of these monitoring, dusting, logging gonna systems and you know, what, how the data changes and then now is that based on that we can learn like, you know, what should be your ideal process and what will break in your process and that's that's the how harness platform works. That's the core of that intelligent automation networks, they're expanding it now to bring a few more of the devops use cases into it Also like the one is cloud cost management because when you, when you, you know, uh you know when we started shipping, there's a lot of people would tell us like, you know, you're you're doing a great job helping us managing the quality, which we always were concerned about like when we're deploying things so you know, security, you know, functionality etcetera. But cloud cost is a big challenge as well. You have your paying like tens and tens of millions of dollars to the cloud providers. And when developers do things in an automated way, it could increase without cost suddenly and we don't know what to do how to manage that. 
So that's the, you know, we we introduced a new model called cloud cost management to as part of the develops software delivery process that every time you're shipping code and we also figure out like, you know, what with impact on on your on your podcast, you know, can we automate the, you know, uh if there is there is too much impact, can we automate the, you know, the roll back around it, you know, can you get and you can you can we stop the delivery process at that point, can we help you troubleshoot and, you know, reduce the cost down? So that's, you know, that's cost becomes another another another dimension to it. Uh you know, then we recently just added uh you know, the next level that's managing feature Flags. And a lot of the time software developers are adding feature flags to like this feature would be given to this consumer and like, you know, and this feature will be given to this consumer until you test it out through uh test kind of thing and like, you know, what is the impact of, you know, uh turning a feature on versus off, you know, we're bringing that into the same ci cd pipeline. So it's kind of an integrated approach to this uh you know, our intelligently automated biplane instead of these uh small point approaches that just very hard to manage. >>I mean the level of data involved the creature flag for instance, the great is an amazing thing because that allows you to do things that used to be extremely difficult to provision. I mean just picking the color of icon, for instance, this kind of blue, I mean I was just, you hear about this, these kinds of things happening at scale and the date is pretty accurate when it comes in. So I think that's an example of the kind of speed and agility that developers want and the question I want to ask you though on that point because this opens up the whole next conversation, you guys have a modern approach and so much traction and you've recently raised big rounds of funding as you go to the market place, your experienced entrepreneur and uh and Ceo you've seen the waves before. What's the big wave that you're on now? What's the big momentum tailwind for harness? Is it the fact that you're creating value for developers or is it the system that you're integrating into with the intelligence to make things smarter and more scalable? What's the or is it all the above? Can you just share what that that story is? >>Yeah, I think it's, it's, it's really, really both of them. But you know, what are our business case when you go to people who tell them like say, if you're you know, 200 developers. uh, you know, we can give you the world's best software delivery tooling at the cost of half to one developer. Right? So like, you know, so which is like 44, 200 person organization at like 200 to 200 to $300,000 a year. They will get the best software delivery tooling better than a Google Facebook Amazon kind of companies very, very quickly. So our, our entire value prop is built on that like a developer experience gets much better. The productivity gets much better. Developers on an average are spending like 20-30% of the time on deployment, delivery-related toil, like unnecessary stuff that we deal with. So it's only 30% more efficiency gain for the developers. Their quality of life gets better that they don't need to worry about like weekends and nights to babysit your deployments and you know, things breaking and troubleshooting things all the time. Right? So that's that's a that's a big big value. 
But as a business you get much more velocity your innovation velocity is much higher. You know your risk on your, you know your consumers is much lower because your quality of the of of you know how your ship becomes becomes better. So our business case of like you know at the past of like 1-2 develops engineers will get you the best develops uh you know tooling in the world possible. You know it's not a hard business case for us to make, right? That's that's what we we we look at, it becomes pretty pretty obvious for you know as people try our product, you know the business case >>you don't have to really pass the I. Q. Test to figure this one out, okay everyone's happier and you have more options to scale and make more money in new opportunities not just existing business. I mean the feature flagging these new features you can build a new value and take more territory if you're a business or whatever your objective is so clear value. Can you give an example of some recent successes you've had or or traction points that you think is worth notable that people can get their arms around. >>Yeah definitely like you know we are we're helping a lot of uh you know a lot of customers you know doing uh like completely changing their uh their uh their process of software delivery, you know, 11 recent example, uh nationwide insurance, you know, nationwide insurance, you know, moving from their data center kind of approach to public cloud and to communities and to microservices, like a major cloud native re architecture and in a very ambitious aggressive project to do it, you know, in a in a in a short period of time and harness becomes a platform for them to kind of, you know, uh to remove all the bottom leg around the process, the software delivery process. You know, they obviously they still have to do the developer side of things and they have to do the cloud infrastructure side of things, which is they're doing. But the entire process of how you bring together, you know, harness becomes accelerated around it. So a lot of these kind of stories that we when we kind of create this fundamental transformation for our for our for our customers, you know, uh you know, moving to to a public cloud, you know, moving to microservices, moving to communities, you know, re architect things, but they become much faster. Cloud native higher, you know, a true software company and you know, I would say that's that's something we we we we take a they can take a lot of pride in, I think are always our biggest challenge is uh is to is to is to evangelize and and convince the market that this is possible to do with the product, because historically people have got told like, you know, the only way you can do this kind of software delivery processes and tooling is by engineering it on your own. So everyone wants us on the path of writing their own, you know, and and it's very hard for every, every company in the world to become very good in writing your own software delivery, tooling and processes and systems, etcetera. Right? So it's uh and that's it. So, you know, there is still that that education and evangelism needs to be done, that, you know, there is uh there is no point, you're trying to do it on your own, you can get a platform that can do it all for you and you can focus on the your core business of, you know, what you want to innovate on. >>And I think the Devil's movement hasn't been pioneered and you have to hand roll everything and that's the way it was. 
But now, as the mainstream market picks this up, you're standing on the shoulders of those pioneers, you are one of them. It's awesome to see this modern approach because it's really playing out in real time again, you've done that before, joe t so it's impressive and, you know, you've seen the movie and developed and the earlier versions pre devops. So, so as cloud native comes and start scaling it's going to be for the rest of us. So, great, great that you're providing the platform and the tools and software. I got to ask you if you don't mind because a lot of people are looking at ways for modern approaches to organizing their teams, how would you define the modern devops movement? You look at devops one point. Oh, we got here. Okay, cloud, cloud native, cloud scale, modern applications, pipe lining. Now, we're looking at a whole another level of confluence of uh of integration and speed. How would you define the modern devops movement? >>Yeah, I think that's a that's a very good question. I think that the core of modern devops, what I would call it develops to point to me is developers self service. It was like the first generation of develops was they create this kind of a devoPS team and then the developers will give all the, you know, delivery related stuff that develops team and the devops team starts to become a bottle, like everywhere now, like in the developed steam job is to build a ci pipeline and the city pipeline and the deployment scripts and you know, do like, you know, you want to do a canary deployment, they have to figure it out how to do it, they have to do, like, you know, you are uh you know, all sort of things that the that needs to be done, you create a central develops team and you give it to them and they become like, you know, uh become a big bottleneck, we look at the modern develops or the next generation and develops has to be done around focusing on the developer experience that and making it all self service for the developers. So you have, you have, let's say you are definitely in for a micro service and it's like, you know 57 engineers, you know, modeling a micro service you want like that, they can go and say this is for our micro service, you know, in a matter of minutes or hours, they can engineer the process without having to lean on a central deVOPS team and to do all the work for them and that's you know, by by maybe a modeler or in some kind of mammal interface or something. That's very easy for them, their experience is so easy that they can manage it themselves without the central deVOPS team have to write it all or cut it all and manage it all. But at the same time the center deVOPS teams, job becomes a bar and governance that can they define the guardrails, that they can define the guardrails on like, you know, you have to have this level of security before something goes into production, you have to have this level of quality before something goes into production, you have to have like, you know, uh this, your cost could not be more than this, right? So you define, so in this instance, instead of the center develops team is doing all the work themselves on writing all the stuff they define the guard rails and it becomes a very easy cell service experience of the developers should do things within those, those guard rails. This is what the modern never actually, >>that's awesome and also accelerate more business value And you're nailing it joe t thank you for coming on and great. 
Uh, the Ceo on the cube ceo and co founder harness harness dot IO. You guys got free trials, free downloads. You got a great, uh, by as you go model also. Um, you're an entrepreneur at heart. Uh, co founder of unusual ventures, Big Labs appdynamics. Now harness. Congratulations. Thanks for coming on. >>Hey, thank you john. >>Okay, this is a cube conversation. I'm john for here in Palo alto California with the cube. Thanks for watching.
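A small sketch of the declarative idea that runs through this conversation: describe the delivery process you want as data and let a platform expand it into concrete steps. The spec format, field names, and stages below are invented for illustration and are not Harness's actual YAML schema.

```python
# A hypothetical declarative pipeline spec, of the kind an engineer might
# write in a few lines instead of scripting every step by hand.
PIPELINE_SPEC = {
    "service": "payments",
    "stages": ["build", "unit_tests", "security_scan", "staging"],
    "canary": {"steps_percent": [1, 10, 100], "verify_with": "ml-baseline"},
    "guardrails": {"max_monthly_cost_usd": 5000, "required_approvals": 1},
}

def expand(spec: dict) -> list[str]:
    """Turn the declarative spec into the ordered concrete steps a platform would run."""
    steps = [f"{spec['service']}: {stage}" for stage in spec["stages"]]
    for pct in spec.get("canary", {}).get("steps_percent", []):
        steps.append(
            f"{spec['service']}: shift {pct}% traffic + verify ({spec['canary']['verify_with']})"
        )
    steps.append(f"{spec['service']}: enforce guardrails {spec['guardrails']}")
    return steps

if __name__ == "__main__":
    for step in expand(PIPELINE_SPEC):
        print(step)
```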

Published Date : Sep 7 2021



Sean Knapp, Ascend.io | CUBE Conversation


 

>>Mhm >>Hello and welcome to this special cube conversation. I'm john furrier here in Palo alto California, host of the cube we're here with Sean Knapp was the Ceo and founder of Ascend dot Io heavily venture backed working on some really cool challenges and solving some big problems around scale data and creating value in a very easy way and companies are struggling to continue to evolve and re factor their business now that they've been re platform with the cloud, you're seeing a lot of new things happening. So Sean great to have you on and and thanks for coming on. >>Thanks for having me john So >>one of the things I've been interesting with your company, not only do you have great pedigree in terms of investors and tech tech staff is that you guys are going after this kind of new scaling challenge um which is not your classic kind of talking points around cloud scale, you know, more servers, more more data more. It's a little bit different. Can you describe what you guys mean around this new scaling challenge? >>Absolutely. The classic sense of scaling, particularly when it comes to the data industry, whether it's big data data science, data engineering has always focused on bits and bytes, how many servers, how big your clusters are and You know, we've watched over the last 5-10 years and those kinds of scaling problems while not entirely solved for most companies are largely solved problems now and the new challenge that is emerging is not how do you store more data or how do you process more data but it's how do you create more data products, how do you drive more value from data? And the challenge that we see many companies today, really struggling to tackle is that data productivity, that data velocity challenge and that's more people problem. It is a how do you get more people able to build more products faster and safely that propelled the business forward? >>You know, that's an interesting topic, We talk about devops and how devops is evolving. Um and you're seeing SRS has become a standard position now in companies site reliability engineers at Google pioneered, which essentially the devops person, but now that you don't need to have a full devops team as you get more automation, That's a big, big part of it. I want to get into that because you're touching on some scale issues around people, the relationships to the machines and the data. It's it's an interesting conversation, but before we do that, can you just take a minute to explain uh what you guys do, what does this send? I o I know you're in Palo alto, it's where I live um and our offices here, what's a sandy all about? >>Absolutely. So what ascend really focuses on is building the software stack on top of modern day, big data infrastructure for data engineers, data scientists, data analyst to self serve and create active data pipelines that feel the rest of their business. Uh And we provide this as a service to a variety of different companies from Australia to Italy finance to IOT uh start ups to large enterprises and really hope elevate their teams, you know, as Bezos said a long time ago, out of the muck of of the underlying infrastructure, we help them do the same thing out of the muck of classic data engineering work, >>that's awesome Andy Jassy now the ceo of amazon who was the sea of avenue too many times over the years and he always has the line undifferentiated heavy lifting. 
Well, I mean data is actually differentiated and it's also heavy lifting too, but you got, you have differentiation with data but it's super important, it's really you gotta but there's a lot of it now, so there's a lot of heavy lifting, this is where people are struggling, I want to get your thoughts on this because you have an opinion on this around how teams are formed, how teams can scale because we know scales coming on the data side and there's different solutions, you've got data bricks, you've got snowflake yet red shift, there's a zillion other opportunities for companies to deploy data tooling and platforms. >>What's your hands to the >>changes in data? >>Well, I think in the data ecosystem is we're changing very, very quickly uh which makes it for a very exciting industry uh and I do think that we are in this great cycle of continuing to reinvest higher and higher up the stack if you will. Right and in many ways we want to keep elevating our teams or partners or customers or companies out of the non differentiated elements. Uh and this is one of those areas where we see tremendous innovation happening from amazon from data breaks from snowflake, who are solving many of these underlying infrastructure, storage processing and even some application layer challenges proteins. And what we find oftentimes is that teams after having adopted some of these stacks on some of these solutions, then have to start solving the problem of how do we build after, how do we build better? And how do we produce more on top of these incredibly valuable investments that we've made and they're looking for acceleration. There's they're looking for in many ways the autopilot self driving level of capabilities, intelligence to sit on top and help them actually get the most out of these underlying systems. And that's really where we need that big changes >>are self driving data, you gotta have the products first. I think you mentioned earlier a data product data being products, but there's a trend with this idea of data products. Data apps. What is the data product? Um that's a new concept. I mean it's not most, most people really can't get their arms around that because it's kind of new data data, but how how does it become product ties and and how do why is it, why is it growing so fast? >>Yeah, that's a great question. I think, you know, quickly uh talked through a lot of the evolution of the industry. Oftentimes we started with the, well let's just get the data inside of a lake and it was a very autumns up notion of what we just collected then we'll go do something with it. The very field of dreams esque approach. Right? And oftentimes they didn't come in and your data just sat there and became a swamp. Right? And the when we think about a data, product oriented model of building it is let's focus on the how do we just collect and store and process data and it's much more on the business value side of how do we create a new data set in architectural models would be how do we launch a new micro service or a new feature out to a customer? But the data product is a new refined, valuable curated live set of data that can be used by the business. Whether it's for data analysts or data scientists are all the way out to end consumers. It is very heavily oriented towards that piece because that's really where we get to deliver value for our end users or customers. 
Yeah, >>getting that data fastest key Again, I love this idea of data becoming programmable or kind of a data ops kind of vibe where you're seeing data products that can be nurtured also scaled up to with people as as this continues The next kind of logical question I have for you is okay, I get the data products now I have teams of people, how do I deploy them? How do the teams change? Because now you have low code and no code capabilities and you have some front end tools that make it easier to create new apps and, and um products where data can feed into someone discovers a cool new value metric in the company. Um they can say here boss is a new new metric that we've identified that drives our business now, they've got a product ties that in the app, they used low code, no code. Where do you guys see this going? Because you can almost see a whole, another persona of a developer emerging >>or engine. Team >>emerging. >>Absolutely. And you know, it's, I think this is one of the challenges is when we look at the data ecosystem. Uh we even ran a survey a couple of months ago across hundreds of different developers asking data scientists, data engineers, data analyst about the overall productivity of their teams. And what we found was 96% of teams are at or over capacity, meaning only 4% of teams even have the capacity to start to invest in better tools or better skill sets and most are really under the gun. And what that means is teams and companies are looking for more people with different skill sets, how and frankly how they get more leverage out of the folks where they have, so they spend less than any more than building. And so what ends up starting to happen is this introduction of low code and no conclusions to help broaden the pool of people who can contribute to this. And what we find oftentimes is there's a bit of a standoff happening between engineering teams and analyst teams and data science teams, teams where some people want low code, some people want no code, Some people just want super high code all day all all the time and what we're finding is and even actually part of one of the surveys that we ran, uh, most users very small percentage less than 10% users actually were amenable to no code solutions, But more than 70% were amenable to solutions that leaned towards lower no code but allowed them to still programs in a language of their choice, give them more leverage. So what we see end up happening is really this new era of what we describe as flex code where it doesn't have to be just low code or just no code but teams can actually plug in at different layers of the staff and different abstract layers and contribute side by side with each other all towards the creation of this data product with applicable model of flats code. >>So let's unpack flex code for a second. You don't mind to first define what you mean by flex code and then talk about the implications to to the teams because it sounds like it's it's integrated but yet decoupled at layers. So can you take me through what it is and then let's unpack a little bit >>Absolutely. You know, fuck. So it is really a methodology that of course companies like ours will will go and product ties. 
But is that the belief structure that you should be able to peel back layers and contribute to an architecture in this case a data architecture, whether it's through building in a no code interface or by writing some low code in sequel or down and actually running lower level systems and languages and it's it's become so critical and key in the data ecosystem. As what classically happened has been the well if we need to go deeper into the stack, we need to customize more of how we run this one particular data job, you end up then throwing away most of the benefits and the adoption of any of these other code and tools. End up shutting off a lot of the rest of the company from contributing. And you then have to be for example, it really advanced scholar developer who understands how to extend doctor runtime environment uh, to contribute. And the reality is you probably want a few of those folks on your team and you do want them contributing, but you still want the data analysts and the data scientists and the software engineers able to contribute at higher levels of the stack, all building that solution together. So it becomes this hybrid architecture >>and I love I met because it's really good exploration here because so what you're saying is it's not that low code and no codes inadequate. It's just that the evolution of the market is such that as people start writing more code, things kind of break down stream. You gotta pull the expert in to kind of fix the plumbing and lower levels of the stack, so to speak, the more higher end systems oriented kind of components. So that's just an evolution of the market. So you're saying flex code is the next level of innovation around product sizing that in an architecture. So you don't waste someone's time to get yanked in to solve a problem just to fix something that's working or broke at this point. So if it works, it breaks. So, you know, it's working that people are coding with no code and low code, it just breaks something else downstream, You're fixing >>that. Absolutely. And that's the um, the idea of being here is, you know, it's one of these old averages. Uh, when you're selling out to customers, we see this and I remember this head of engineering one time I told me, well, you may make 95% of my team's job easier. But if you make the last 5% impossible, it is a non starter. And so a lot of this comes down to the how do we make that 95% of the team's job far easier. But when you really have to go do that one ultra advanced customized thing, how do we make sure you still get all the benefits of Oftentimes through a low code or no code interface, but you can still go back down and really tune and optimize that one piece. >>Yeah, that's really kind of, I mean this is really an architectural decision because that's the classic. You don't want to foreclose the future options. Right? So as a developer, you need to think this is really where you have to make an architecture decision That's really requires you guys to lean into that architectural team. How do you guys do that? What those conversations look like? Is it work with a send and we got you covered or how does those conversations go? Because if someone swinging low code, no code, they might not even know that they're foreclosing that 5%. >>Yeah. 
Oftentimes, you know, for them, they're the ones that are given the hairiest, gnarliest problems to solve, and they may not even have the visibility that there is a team of 30 analysts who can go write incredible data pipelines if they are still afforded a low code or no code interface on top. And so for us, we really partner heavily with our customers and our users. We do a ton of joint architecture and design decisions, not just for their products, but we actually bring them in to all of our architecture, design, and road mapping sessions as well. And we do a lot of collaborative building, very much how we treat the developer community around the company. We spend a lot of time on that. >> Part of your partner strategy. You're building the bridge to the future with the customer. >> Yeah, absolutely. In fact, almost all of our communications with our customers happen in shared Slack channels. We are treated like extensions of our customers' teams, and we treat them as our internal customers as well. >> And that's the way it should be. You're doing some great work, it's really cutting edge and really setting the table for a decade of innovation with the customer, if they get it right. So I gotta ask you, with this architecture, you've got to be factoring in automation, because orchestration and automation are the principles of DevOps taken to the next level. I love this conversation. DevOps 2.0, 4.0, whatever you want to call it, it's the next level of DevOps, it's data automation, and you're taking it to a whole other level within your sphere. Talk about automation and how that factors in, and obviously the benefits of the automation. An autonomous data pipeline would be cool, no coding, but I can see maintenance is an issue. How do you offload developers so that it's not only an easy button but a maintenance button? >> Yeah, absolutely. What we find in the evolution of most technical domains is this shift at some point from an imperative developer model to a declarative developer model. For example, we see this in databases with the introduction of SQL, we see it in infrastructure definition with tools like Terraform and now Kubernetes, and what we do from an automation perspective for data pipelines is very similar: what Kubernetes does for containers, we do for data pipelines. We introduce a declarative model and put in this incredible intelligence that tracks everything around how data moves. For us, metadata alone is a big data problem because we track so much information, and all of that goes into this central brain that is dynamically adapting to code and data for our users. So for us, the biggest potential to automate is to help alleviate maintenance and optimization burdens for users, so they get to spend more time building and less time maintaining. And that really goes into how you have this central brain that tracks everything and builds this really deep understanding of how data moves through an organization. >> That's an awesome vision. My brain's firing off like, okay, so what about runtime assembly? As you orchestrate data in real time, you have to kind of pull the assembly together and link and load all this data. I can only imagine how hard that is. Right?
So can you share your vision? Because you mentioned Docker containers, and the benefit of containers is, you know, they can manage stateful and stateless data. So as you get into this notion of stateful and stateless data, how do you assemble it all in real time? How does that work? How does that brain figure it out? What's the secret sauce? >> Yeah, that's a really great question. You know, for us, and this is one of the most exciting parts for our customers and our users, we help with this paradigm shift, where the classic model has been: you write code, you compile it, you ship it, you push it out, and then you cross your fingers like, gosh, I really hope that works. It's a very slow iteration cycle. One of the things that we've been able to do because of this intelligence layer is actually help hybridize that for users. You still have pipelines, and they still run and they're still optimizing, but we make it an interactive experience at the same time, very similar to how notebooks made data science such an interactive experience. We make the process of building data pipelines and doing data engineering work iterative and interactive. You're getting instantaneous feedback and evolving very quickly. So the things that used to take weeks or months due to slow iteration cycles can now be done in hours or days, because you get such fast feedback loops as you build. >> Well, we definitely need your product. We have so much data on the media side, all these events are like little data, but it's a lot of little data that makes it a big data problem. And I do feel like I'm jumping out of the airplane with a parachute, hoping it will open, you know? We don't know, right? So a lot of the fear is, you know, we don't want to crater and build data products on a prayer, right? That's really what everyone's doing right now, it's kind of the state of the industry. How do you guys make it easy? That's the question, right? Because you brought up the human aspect, which I love, the human scale, the scale of teams. Nobody wants another project if they're already burnt out with COVID and they don't have enough resources. There's a little bit of psychology going on in the human mind now around burnout, and the relationship of humans to data now has this human interaction to it. All of it is around these themes of the future of work, simplicity, and self service. What are your thoughts on those? >> Oh, I wholeheartedly agree. I think we need to continue to push those boundaries around self service, around developer productivity, and frankly just outright data productivity. And for us, I think it's become a really fascinating time in the industry. I would say in 2019, much of the industry, the users and builders in the industry, just embraced the fact that frankly building data pipelines sucked, and it was a badge of honor because it was such a hard and painful thing. Yet what we're finding now, as the industry is evolving, is an expectation that it should be easier.
And people are challenging that conventional wisdom and expecting building data pipelines to be much easier, and that's really where we come in, both with a flex code model and with high levels of automation, to keep people squarely focused on rapid building versus maintaining and tinkering deeper in the stack. >> You know, I really think you're on to something with that scaling challenge of people and teams. It's a huge issue to match that to the pace of cloud and data scale, and I'm glad you're focusing on that, because that's a human issue. And then on the data architecture, I mean, we've seen how to do a failed project: you require the customer to do all this undifferentiated heavy lifting, and there's a time lag just to get to value, right? So you're on the right track. How do you talk to customers? Take a minute to share with the folks who are watching, whether it's a customer, an enterprise, or a potential customer: what's in it for them? Why Ascend, why should they work with you? How do they engage with you guys? >> Yeah, absolutely. What's in it for customers is time to value, truncated dramatically. You get projects live, and you get them far faster than you ever thought possible. The way that we engage with our customers is we partner with them. We launch them on the application, they can buy us from the marketplace, and we will actually help architect their first project with them and ensure that they have a full fledged, live data product within the first four weeks. That, I think, becomes the most key thing, and frankly features and functions and so on really don't matter. Ultimately, at the end of the day, what really matters is: can you get your data products live, can you deliver business value, and is your team happy as they get to go build? Do they smile more throughout the day because they're enjoying that developer experience? >> So you're providing the services to get them going. It's the old classic expression, teaching them how to fish, and then they can fish on their own, is that right? >> Yep. Absolutely. >> And then doing the next thing, and the next thing. >> Yeah, and then we're excited to watch, quarter after quarter, year after year, our customers build more and more data products, and their teams are growing faster than most of the other teams in their companies because they're delivering so much value. That's what's so exciting. >> You know the cliche, every company is a data company. I know that's kind of cliche, but it's true, right? Everyone has to have that core DNA. But they shouldn't have to hire a hardcore data engineering organization; they have a data team for sure, and that team has to create a service model for practitioners inside the company. >> I agree. >> Sean, great conversation. Great to unpack flex code, I love that approach: take it to the next level, take low code to the next level with data. Great stuff. And Ascend.io, a Palo Alto based company, congratulations on your success. >> Thank you so much, John. >> Okay, this CUBE conversation here in Palo Alto. I'm John Furrier, your host of theCUBE. Thanks for watching.
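To make the declarative, "flex code" pipeline idea from the conversation above a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not Ascend's product or API; the stage names, the tiny dependency-resolving engine, and the sample data are all invented purely to illustrate how a pipeline declared as data can be resolved and memoized automatically, so changing one stage only forces recomputation downstream.

# Illustrative sketch only: a toy declarative pipeline, not Ascend's actual API.
# Each stage declares its inputs and a transform; the small "engine" below resolves
# the dependency order and memoizes results.

from typing import Callable, Dict

Stage = Dict[str, object]  # {"inputs": [upstream names], "fn": callable}

PIPELINE: Dict[str, Stage] = {
    "raw_events":   {"inputs": [], "fn": lambda: [{"user": "a", "amount": 10},
                                                  {"user": "b", "amount": 25},
                                                  {"user": "a", "amount": 5}]},
    "valid_events": {"inputs": ["raw_events"],
                     "fn": lambda rows: [r for r in rows if r["amount"] > 0]},
    "revenue_by_user": {"inputs": ["valid_events"],
                        "fn": lambda rows: {
                            u: sum(r["amount"] for r in rows if r["user"] == u)
                            for u in {r["user"] for r in rows}}},
}

def run(pipeline: Dict[str, Stage], target: str, cache: Dict[str, object] = None):
    """Resolve dependencies for `target` and execute each stage exactly once."""
    cache = {} if cache is None else cache
    if target in cache:
        return cache[target]
    stage = pipeline[target]
    upstream = [run(pipeline, name, cache) for name in stage["inputs"]]
    cache[target] = stage["fn"](*upstream)
    return cache[target]

if __name__ == "__main__":
    # e.g. {'a': 15, 'b': 25}
    print(run(PIPELINE, "revenue_by_user"))

In a real system the declaration would also carry SQL or notebook transforms side by side with Python ones, which is the "plug in at different layers of the stack" point made in the interview.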

Published Date : Sep 7 2021

SUMMARY :

John Furrier talks with Sean Knapp of Ascend.io about building data products at scale. Knapp shares survey data showing 96% of data teams are at or over capacity, and explains Ascend's "flex code" approach, which lets analysts, data scientists, and engineers contribute to the same data architecture through no code, low code, or full code interfaces. He also describes a declarative, metadata-driven automation layer that maintains and optimizes pipelines, shortening iteration cycles from weeks to hours and getting customers' first data products live within about four weeks.


Vasanth Kumar, MongoDB Principal Solutions Architect | Io-Tahoe Episode 7


 

>> Okay. We're here with Vasanth Kumar, who's the Principal Solutions Architect for MongoDB. Vasanth, welcome to "theCube." >> Thanks Dave. >> Hey, listen, I feel like you were born to be an architect in technology. I mean, you've worked for big SIs, you've worked with many customers, you have experience in financial services and banking. Tell us, the audience, a little bit more about yourself, and what you're up to these days. >> Yeah. Hi, thanks for inviting me to this discussion. I'm based out of Bangalore, India, with around 18 years of experience in the IT industry, building enterprise products for different domains and verticals: finance and enterprise banking applications, IoT platforms, digital experience solutions. I've now been with MongoDB for nearly two years, working in the partner team as a principal solutions architect, especially working with ISVs to build best practices for handling data and embedding the right database as part of their product. I also work with technology partners to integrate compatible technologies with MongoDB, and with private cloud providers to provide a database as a service. >> Got it. So, you know, I have to say, Vasanth, I think Mongo kind of nailed it. They were early on with the trends of managing unstructured data, making it really simple. There was always a developer appeal, which has lasted, and they did so with an architecture that scales out. Back in the early days when Mongo was founded, I remember those days, digital transformation wasn't a thing, it wasn't a buzzword, but it just so happens that Mongo's approach dovetails very nicely with a digital business. So I wonder if you could talk about that, talk about the fit and how MongoDB thinks about accelerating digital transformation and why you're different from, like, a traditional RDBMS. >> Sure, exactly, yeah. You have the right understanding, let me elaborate on it. So we all know that customer expectations change day by day, because of business agility, functionality changes, and how they want to experience the applications or the apps. And obviously this leads to the agility of the information which moves between the multiple systems or layers. And to achieve this, obviously the way of architecting or developing the product is a completely different shift, maybe moving from the monolith to microservices or event-based architecture and so on. And obviously the database has to be apt for these environments, to adopt these changes, to adapt to the scale of load and the other things. Okay. And also, we see that the common protocol for information exchange is JSON, and if the database adopts it natively, that is a perfect fit. Okay. So that's where MongoDB fits perfectly for building or transforming modern applications, because it's a general purpose database which accepts JSON as a payload and stores it in a BSON format. Suppose you want to develop a particular application or transform an existing application; typically they look at what effort is required, what the cost involved is, and how quickly they can do it, without disturbing the functionality. That's the main important thing. And since it is a multi-model database with a JSON format, you can easily build an application. Okay?
You don't need a lot of transformation. In the case of an RDBMS, you get the JSON payload, you transform it into a tabular structure or a different format, and then probably you build an ORM layer and then map it and save it. There is a lot of work involved in it, a lot of components that need to be written in between. But in the case of MongoDB, what they can do is get the information from the multiple sources and put it in the DB as is, or transform it based on the access patterns, and then store it quickly. >> Dave: Got it. >> And I'll tell you, Dave, because today you have a certain context of data, which has a selected set of information. Probably tomorrow the particular customer has more information to put in. So how do you capture that? In the case of an RDBMS, you need to change the schema, and once you change the schema, your application breaks down. But here it magically adapts. You pass the extra information, it's open for extension, it adopts it easily. You don't need to redeploy or change the schema or do anything like that. >> Right. That's the genius of Mongo. And then of course, you know, in the early days people said, oh, you know, Mongo, it won't scale. And then of course came the cloud, and I follow Atlas very closely, I look at the numbers every quarter. I mean, overall cloud adoption is increasing like crazy. Our Wikibon analyst team has the big four cloud vendors, just in IaaS, growing beyond 115 billion this year. That's 35% on top of, you know, 80-90 billion last year. So talk more about how MongoDB fits with the cloud and how it helps with the whole migration story, 'cause you're killing it in that space. >> Yeah. Sure. Just to add one more point on the previous question: continuously, for the past four to five years, we have been the number one most wanted database. >> Dave: Right. >> Okay. That's how the popularity has grown, that's how the adoption has happened. >> Dave: Right. >> I'm coming back to your question- >> Yeah, let's talk about the cloud and database as a service. You guys have actually packaged that very nicely, I have to say. >> Yeah. So we have spent a lot of effort and time in developing Atlas, our managed database as a service, which typically gives the customer a way of just concentrating on their application rather than maintaining and managing the whole database or scaling infrastructure. All of those things are taken care of. You don't need to be a DB expert when you are using Atlas. We provide the managed database on the three major cloud providers, AWS, GCP, and Azure, and it's also purely multicloud, you know, like you can have a primary in AWS and replicated nodes in GCP or Azure. It's purely multicloud, so you don't have cloud blocking. If you feel that, okay, for your business you need to move to GCP, you don't need to bother, you can easily migrate to GCP. Okay. No vendor lock-in, no cloud lock-in in this particular- >> So Vasanth, maybe you could talk a little bit more about Atlas and some of the differentiable features, things that you can do with Atlas that maybe people don't know about. >> Yeah, sure Dave. Atlas is not just a managed database as a service, it's a complete data platform and it provides many features.
For example, you build an application, and probably three years down the line, the data which you captured three years back might be old data. How do you handle that? There's no need for you to manually purge or do anything. We have an online archival where you configure the data so that data older than two years is automatically archived. This is taken care of automatically: you have hot data kept in the Atlas cluster and the cold data moved off to an archive. And we also have a data lake where you can run federated queries. For example, you've done an archival, but what if people want to access that data? With the data lake, on a single connection, you can run federated queries on both the active and the archived data. That's the beauty: you archive the data, but you are still able to query it. And we also have Charts, where you can build visualization on top of the data you have captured. You can build graphs and also embed these graphs as part of your application, or you can share them with customers, with the CXOs and other teams. >> Dave: Got it. >> It's a complete data platform. >> Okay. Well, speaking of data platform, let's talk about Io-Tahoe's data RPA platform, and coupling that with MongoDB. So maybe you could help us understand how you're helping with process automation, which is a very hot topic, and just this whole notion of modern application development. >> Sure. See, the process automation is more with respect to the data, how you manage this data, and what you derive and build as a business process on top of it. I see there are two parts to it. One is the source of data: how do you identify and discover the data, how do you enrich the context or transform it, give a business context to it. And then you build business rules or act on it, and then you store the data, or you derive the insights, or enrich it and store it in the DB. The first part is completely handled by Io-Tahoe, where you can tag the data from the multiple data sources. For example, if we take a customer 360 view, you can grab the data from multiple data sources using Io-Tahoe, and you discover this data, you can tag it, you can label it, and you build a view of the complete customer context, then use a Realm webhook and the data is ingested back into Mongo. That's all in more of a serverless fashion; you can build this particular customer 360 view, for example. And just to talk about the Realm webhook I mentioned: Realm is a backend API that you can create on top of the data in the Mongo cluster, which is available in Atlas. Okay. Then once you run it, the APIs are ready. You build it as data as a service, and you have fully secured APIs available. These APIs can be integrated within a mobile app or a web application to build a modern application. What's left is just to build the UI artifacts and integrate these APIs. >> Yeah, I mean, we live in this API economy. Companies throw that out as sort of a buzz phrase, but Mongo lives that. I mean, that's why developers really like Mongo. So what's your take on DevOps? Maybe you could talk a little bit about your perspective there, how you help devs and data engineers build faster pipelines. >> Yeah, sure. Okay, this is my most favorite topic.
You know, it's a buzzword now, with all of DevOps moving away from traditional deployment. We support deployment automation in multiple ways, okay, and also provide diagnostics under the hood. We have two options in MongoDB. One is the enterprise option, which is more the on-prem version, and Atlas is more the cloud managed database service. Okay. In the case of Enterprise Advanced, we have Ops Manager and the Kubernetes operator. Ops Manager will manage all sorts of deployment automation and upgrades, and provides diagnostics, both with respect to the hardware and with respect to MongoDB: it gives you profiling and slow running queries, so you can get a context of what's happening with the data using that. Using the enterprise operator, you can integrate with an existing Kubernetes cluster, either in a different namespace or an existing namespace, and orchestrate the deployment. And in the case of Atlas, we have an Atlas Kubernetes operator, which helps you integrate from your Kubernetes environment, so you don't need to leave Kubernetes. And also we have worked with the cloud providers. For example, we have CloudFormation templates where, in one click, you can roll out an Atlas cluster with the complete platform. So we are continuously working and evolving on the DevOps side, whether rolling out a Helm chart or an operator, with a standard approach for different types of deployments. >> You know, some really important themes here. Obviously, anytime you talk about Mongo, simplicity comes in, and automation, that big push that Io-Tahoe is making. What you said about data context was interesting, because a lot of data systems and organizations lack context, and context is very important, so auto classification and things like that. And the other thing you said about federated queries, I think, fits very well into the trend toward decentralized data architecture. So very important there. And of course, hybridisity. I call it hybridisity: on-prem, cloud, abstracting that complexity away and allowing people to really focus on their digital transformations. I tell ya, Vasanth, it's great stuff. It's always a pleasure chatting with Io-Tahoe partners and really getting into the tech with folks like yourself. So thanks so much for coming on theCube. >> Thanks. Thanks, Dave. Thanks for the nice discussion. >> Okay. Stay right there. We've got one more quick session that you don't want to miss.
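A minimal sketch of the schema flexibility Vasanth describes, using the standard PyMongo driver. It assumes a reachable MongoDB instance (local or Atlas); the connection string, database, collection, and field names are placeholders invented for the example, not anything from the interview.

# Minimal sketch of the schema-flexibility point above, using the PyMongo driver.
# Point the connection string at your own local MongoDB or Atlas cluster.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
orders = client["demo_shop"]["orders"]

# Two documents with different shapes land in the same collection:
# no schema migration, no ORM layer in between.
orders.insert_one({"customer": "c1", "amount": 40.0})
orders.insert_one({"customer": "c2", "amount": 15.5,
                   "channel": "mobile", "coupon": "SPRING10"})

# Tomorrow the app starts tracking loyalty points: just set the new field.
orders.update_one({"customer": "c1"}, {"$set": {"loyalty_points": 12}})

# A small aggregation gives a "customer 360"-style rollup like the one discussed.
pipeline = [
    {"$group": {"_id": "$customer",
                "total_spend": {"$sum": "$amount"},
                "orders": {"$sum": 1}}},
    {"$sort": {"total_spend": -1}},
]
for row in orders.aggregate(pipeline):
    print(row)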

Published Date : Aug 10 2021

SUMMARY :

Dave Vellante talks with Vasanth Kumar, Principal Solutions Architect at MongoDB, about why a JSON-native, flexible-schema database fits modern, cloud native application development, and how Atlas serves as a managed, multi-cloud data platform with online archival, federated queries through the data lake, and Charts. They also discuss how Io-Tahoe's data RPA platform pairs with MongoDB and Realm APIs to discover, tag, and serve data for use cases like a customer 360 view, and how Ops Manager and the Kubernetes operators support DevOps automation.


Yusef Khan


 

(gentle music) >> From around the globe, it's theCUBE, presenting Building Immersive Customer Experiences with Customer Data 360. Brought to you by Io-Tahoe. >> Hello everyone and welcome back to Io-Tahoe's seventh installment of their Data Automation series, Building Immersive Customer Experiences with Customer Data 360. Now in this first segment, we're to catch up with Yusef Khan, who is Io-Tahoe's Head of Data Services. Yusef, always great to see you. Welcome back to theCUBE. >> Thank you, Dave. It's great to be back. Thank you for having me. >> Our pleasure. So let's talk about Customer Data 360. What does that actually mean in terms of the data? Give us a little background here. >> Well, Dave, we're living in a world now, where customer expectations are really, really high. A world in which the customer ethos if you like, is almost, talk to me like you love me. And that attitude is pretty common. So it's a world in which if you've shared your data with an organization, you absolutely expect that organization, that company to optimize your experience using that data. And when it comes to data, these very high expectations can be challenging to meet and there are several reasons for that. I mean, to mention just a few, an enterprise can have many different diverse data sources. It can have customer records that are duplicated or incomplete, the data quality itself can be poor, and what Customer Data 360 does, is help enterprises understand their data states, get more insight on their customer base, improve data quality, and then ultimately improve their customer experience and bring it in line with the expectation of today's customers. >> Great. Thank you for that. Well, so maybe not love me, but at least know me, right? So, poor data quality, and I think we can all relate to this. Like, you call a service provider, they either have old data, or bad data, you sometimes get double billed and it's up to you to figure that out. So, can the 360 degree view help with this problem? How so? What data does it generate to address this? >> Yeah, absolutely. It can help. So Customer Data 360 allows organizations to produce a fundamentally more personalized experience for customers. It helps eliminate the often generic sales pitches people get on email or in social media ads. It helps curate recommendations that add genuine value to that specific customer. So for example, if you typically buy three products from a certain brand every month, that data is going to be tracked, saved for the future, and it will make the next month's shopping more convenient by suggesting the same products or complementary products. Not only that, Customer Data 360 will track purchases across all touch points, and understand the customer in the round. So across in store, online, mobile app, tracking all those patterns. Same time, all your data is kept secure and private, and it's only used in ways that you expect it to be used. >> Well, to me, this is really, really important. I mean, especially after this year, we've seen online purchases go through the roof. (chuckles) Every time I buy something, I get an ad for that something, then for the next week, until I turn it off. I mean, it's clear that the state of data still has a way to go based on the quality and so you're addressing that, but take us through the process of identifying for instance, incorrect data or duplicate customer data. How do you do that? >> Well, Dave customer data changes so frequently. 
So for example, people get married and there are name changes. People move homes, so the address changes. Emails change or get updated, people change phones or phone numbers. The list goes on. Customer Data 360 identifies records that probably belong to the same customer, and offers a unified view of the customer for insights and for campaigns. It also offers a single household view, hoping to link together data from customers based at the same address. And then finally, it gives a datum, a data target operating model, to help drive continuous improvement through the enterprise. This means it helps embed the right process and culture with the organization's people, as well as the technology. >> So Yusef, just a quick aside, if I may. So essentially, I presume you're using some kind of machine intelligence which we've talked about before, to infer from, triangulate different data points and identify the probability that this individual is the same person, right? And then making that call. >> Yeah. Using machine learning and algorithms, you're able to do this much more quickly, much more effectively, much more cost-effectively than doing it via manual methods. Sometimes using manual methods, it's not really possible to do this type of work. So absolutely, there is a technological core backend that enables this work. >> Yeah, the manual just doesn't scale and humans just frankly aren't that good at it. So besides incorrect customer data, what other kinds of challenges are companies facing, and how are you addressing those? >> There are lots of different challenges. The data quality itself may be poor, so you've got the classic, "I've got the wrong address for that customer or the wrong email address", and that can happen multiple times over if you've got multiple records for each customer as well. The customer age might not be there, can be quite critical for streaming and other online services, so who's really a child and who's an adult? That can be very, very key for consent and things like that. Data relationships and data lineage may be unclear. Updating one system may not flow through into another system. Marketing and other permissions may not be captured correctly, and even sensitive data, PII, Personally Identifiable Information may be spread through the enterprise with no real understanding of where it is. And finally, there are cultural factors, like individual functions may jealously guard their own database, they may not share data in a way that's collaborative or useful for the whole enterprise. >> Great. Thank you for that. So, the big picture is this is going to drop right to my bottom line. I mean, if I'm sending duplicate communications, physical flyers, snail mail to the same household, people are just tossing it, they get frustrated. Or if I'm unknowingly giving minors access to restricted information, we've seen horror shows like that before, if that happens, you're going to lose customers, you're going to lose money. We all know the cost of losing customers is much, much higher (chuckles) than getting them. You have to get them back, forget it. It's three, four times X, what it originally cost. Where is Io-Tahoe going, to address this and remediate these problems? >> Well, Customer Data 360 really starts by understanding and fixing the fundamentals. 
So it starts by helping the customer understand their data estate, mapping the data relationships and the data lineage, automatically populating a data catalog so the customer knows what they have in terms of data, automatically assessing data quality, and recommending how it can be improved, automatically analyzing data record duplication and data source redundancy, and the customer can then get to a single view of the customer and the household as we said, this is enabled by the data target operating model which embeds this process and drives continuous improvement. The enterprise can then deploy raw data for analytics, model building, data science, can then productionize those models and related pipelines, and use them to start pushing out relevant messages and offers to customers. Obviously then, you capture the results. You use those to refine the offering and continuously improve, win customers, win friends, influence people, and grow revenue times a thousand. >> So, I've got to ask you another aside, if I may. I mean, we've talked about this in previous episodes. A lot of this, correct me if I'm wrong, you've got data source issues as well. I mean, you may not know that the address has changed but there may be other data sources that you can ingest that where the address has changed and you can bring that into your platform, but oftentimes, organizations don't want to do that. They don't want to add the data source, it's too complex, it adds more data quality issues, so it's a challenge somewhat. So, I'm just kind of connecting the dots from previous conversations that we've had. You know, we're at number seven now, but I can start to see this coming together. Maybe you could comment on that data source challenge. >> Yeah, absolutely. Organizations often have, I suppose you could call it dark data or data that they don't know that they have. So it does partly start with going back to the fundamentals of what data do you hold, rationalizing that data, using automated processes and machine learning to do that so you can do it more rapidly and effectively, getting them to a single view of the customer, and then using that in all the ways that advanced analytics and data science give you these days to get to a better customer experience and a better customer outcome. But as you say, a lot of that starts with identifying your data sources and understanding your data sources in the first place. >> Well, I've been watching you guys, your progress since COVID began and you're making some good moves here, Yusef and always great to catch up. I really appreciate your time and insights. >> Thank you, Dave. Nice to speak to you. Thanks for having me. >> Our pleasure. Okay, don't go away folks. Up next, we've got Ajay Vohora. He's the CEO of Io-Tahoe, and he's going to be joined by Mongo DB's principal solutions architect, talking through how to build modern apps using data RPA. Keep it right there, be right back. (gentle music)

Published Date : Jun 22 2021

SUMMARY :

Dave Vellante catches up with Yusef Khan, Head of Data Services at Io-Tahoe, to discuss Customer Data 360: helping enterprises understand their data estates, improve data quality, and use machine learning to identify duplicate customer records, so they can build a single view of the customer and household, personalize experiences, and continuously improve through a data target operating model.


James Labocki, Red Hat & Ruchir Puri, IBM | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back to theCUBE's coverage, everyone, of KubeCon + CloudNativeCon 2021 Virtual Europe. I'm John Furrier, your host of theCUBE. We've got two great guests here: James Labocki, Senior Director of Product Management at Red Hat, and Ruchir Puri, IBM Fellow and Chief Scientist at IBM. Gentlemen, thanks for coming on theCUBE, appreciate it. >> Thank you for having us. >> So, we've got an IBM Fellow and Chief Scientist and a Senior Director of Product Management. You guys have the keys to the kingdom on cloud native. All right, it's gonna be fun, so let's just jump into it. I want to ask you, before we get into some of the questions around the projects, what you guys make of KubeCon this year in terms of the vibe. I know it's virtual in Europe and North America; it looked like we might be in person, but this year, with the pandemic, cloud native just seems to have a spring in its step, it's got more traction. I've seen the cloud native piece even more than Kubernetes in a way. So Kubernetes continues to have traction, but it's not always about Kubernetes now, it's more cloud native. What do you guys think about that? >> Yeah, I'm sure you have thoughts, and I can add on. >> Yes, I think I would really think of it as almost sequential in some ways. Kubernetes is the core now, and there's a layer which comes above it, which is where all our, you know, clients and enterprises realize the value, which is when the applications really move. It's about the applications and what they can deliver to their end customers. And the game now is really about moving those applications and making them cloud native. That's when the value of that software infrastructure will get realized, and that's why you are seeing that vibe in the clients and enterprises and at KubeCon as well. >> Yeah, I mean, I think it's exciting. I've been covering this community since the beginning, as you guys know from theCUBE. This is the enablement moment, where the fruit is coming off the tree; you're starting to see that first wave of the enablement you mentioned, it's happening, and you can see it in the projects. So I want to get into the news here, the Konveyor community. What is this about? Can you take a minute to explain, what is the Konveyor community? >> Yeah, yeah. I think, you know, what we discovered is, as we were starting to work with a lot of end users and practitioners, what we're finding is that they kind of get tired of hearing about digital transformation from multiple vendors and from sales folks and these sorts of things. And when you speak to the practitioners, they just want to know the practical implications of moving towards a more cloud native architecture. And so, you know, when you start talking to them at levels beyond just generic marketing speak, and even beyond the business cases, the developers and sysadmins need to know what it is they need to do to their application architectures and the ways they're working in order to successfully modernize their applications. And so the idea behind the Konveyor community was really kind of twofold. One was to help with knowledge sharing.
So we started running meetups where people can come and share their knowledge of what they've done around specific topics, like strangling monoliths or carving off sidecar containers, things that they've done successfully, to help kind of move things forward. So it's really about knowledge sharing. And then the second piece we discovered was that there's really no place where you can find open source tools to help you rehost, replatform, and refactor your applications to Kubernetes. And so that's really where we're trying to fill that void: provide open source options in that space and invite everybody else to collaborate with us on that. >> Can you give an example of some use cases of people doing this, why the need, the drivers? It makes sense, right? As you're growing, you've got to move applications, people want to have applications moved to Kubernetes, I get that. But what are some of the use cases that are forcing this? >> Yeah, absolutely, for sure. I don't know if you have any you want to touch on specifically, and I can add on as well. >> Yeah, I think, on the key use cases, let me just build on what James talked about: rehosting, replatforming, and refactoring. I'm going to put some numbers on it and then talk about the use cases a little bit as well. I would really say the virtual machine movement is the first one to happen; it's the easier one, relatively speaking, but it's the first one to happen. The replatforming one is where you are now really sort of changing the stack as well, but not changing the application in any major way yet. And the hardest one happens around refactoring, which is, you know, this is when we start talking about cloud native: you take a monolithic application, these legacy applications which have been running for a long time, and try to refactor them so that you can build microservices out of them. The very first set of clients that we are seeing at the leading edge around this will be around banking and insurance. Legacy applications; banking and finance is obviously a large industry, and that's where you see the first movement, where the complexity of the application, in terms of some of the legacy code, is moving onto the cloud for a cloud native implementation, as well as a diversity of scenarios from a rehosting and replatforming point of view. And we'll talk about some of the tools that we are putting in the community to help the users and the developer community in many of these enterprises move a lot of their applications to a cloud native implementation. It's also about helping them with what I describe as best practices. It is not just about tools, it's about the community coming together. How do I do this? How do I do that? There are best practices that we as a community have gathered, and it's about that sharing as well, James. >> Yeah, I think you hit the nail on the head. Right, so for rehosting, for example, you might have an application that was delivered by an ISV that is not available containerized yet. You need to bring that over as a VM, so you can bring that into KubeVirt, you know, and actually bring that over and just rehost it.
Or you might have some things that you've already containerized, but they're sitting on a container orchestration layer that is no longer growing, right? So the innovation has kind of left that platform, and Kubernetes has become the standard container orchestration layer, the de facto standard if you will. And so you want to replatform that, and that takes massaging and transforming metadata to create the right objects and so on and so forth. So there's a bunch of different use cases that fall into that rehost, replatform, all the way up to refactor range. >> So just explain for the audience, and I know I love the three things, rehosting, replatforming, and refactoring: what's the difference between replatforming and refactoring specifically, what's the nuance there? >> Yeah, yeah. So a lot of times, I think, you know, obviously Amazon kind of popularized the six R's framework years ago, and if you look at what they popularized, replatform is really kind of like a lift, tinker, and shift. So maybe I'm not just taking my VM and putting it on new infrastructure; I'm going to take my VM, maybe put it on new infrastructure, but I'm going to switch my app server to a lighter weight one, or something like that, at the same time. So that would fall into a replatform. Or, in another case, one of the things we're seeing pretty heavily right now is the move from Cloud Foundry to Kubernetes, for example, where people are looking to take their application and actually transform it and run it on Kubernetes, which requires you to really kind of replatform as well. >> And refactoring is what, specifically? I get the- >> Right, refactoring, I think just following on to what James said, is really about the complexity of the application, which was mainly a large monolithic application, many of these legacy applications which represent so many years, actually hundreds of millions of dollars of assets for these enterprises. It's about taking the code and refactoring it, dividing it into different pieces of code which can themselves be spun up as microservices. So then it takes advantage of the agility of development in a cloud native environment as well. It's not just about lift and shift of the VM, or lift, tinker, and shift from a stack point of view; it's really about taking applications and dividing them so that we can spin up microservices, and it has the agility of development of the cloud. >> That's a great clarification, I totally get it, and I really want to get that out there, because replatforming is really a good way to go to the cloud. Hey, I've got an open source option, I'll use that, I can do this over here, and then if we use that vendor over there, use open source over there. Really good way to look at it. And I like the refactoring: it's like a complete re-architecture, or refactoring if you will. So thank you for the clarification. Great, great topic. This is what practitioners think about. So I gotta ask the next question: what projects are involved in the community that you guys are working on? It seems like a really valuable service and group. Can you give an overview of what's going on in the community specifically?
>> Yeah, so there's really, right now, kind of five projects that are in the community, and they're all in different stages of maturity as well. So when you look at rehosting, there are two primary projects focused on that. One is called Forklift, which is about migrating your virtual machines into KubeVirt. KubeVirt is a way that you can run virtual machines orchestrated by Kubernetes. We're seeing kind of a growth in demand there, where people want to have a common orchestration for both their VMs and containers running on bare metal, and Forklift helps you actually mass migrate VMs into that environment. The second one on the rehosting side is called Crane. Crane is really a tool that helps you migrate applications between Kubernetes clusters. So imagine you have persistent data in one Kubernetes cluster and you want to migrate a namespace from one cluster to another; that's where Crane comes in and actually helps you migrate between those. Then on the replatform side we have Move2Kube, which actually came from the IBM research team. So they actually open sourced that. Ruchir, do you want to speak about Move2
Uh, not like the cube with a C. Uh, but like cube with a K. Uh, they can go to a conveyor to Ohio and um, there they can find everything they need. So, um, we have a, you know, a governance model that's getting put in place, contributor ladder, all the things you'd expect. We're kind of talking into the C N C F around the gap delivery groups to kind of understand if we can um, how we can align ourselves so that in the future of these projects take off, they can become kind of sandbox projects. Um and uh yeah, we would welcome any and all kind of contribution and collaboration >>for sure. I don't know if you have >>anything to add on that, I >>think you covered it at the point has already um, just to put a plug in for uh we have already been having meetups, so on the best practices you will find the community, um, not just on convert or die. Oh, but as you start joining the community and those of meet ups and the help you can get whether on the slack channel, very helpful on the day to day problems that you are encountering as you are taking your applications to a cloud native environment. >>So, and I can see this being a big interest enterprises as they have a mix and match environment and with container as you can bring and integrate old legacy. And that's the beautiful thing about hybrid cloud that I find fascinating right now is that with all the goodness of stade Coubertin and cloud native, if you've got a legacy environments, great fit now. So you don't have to kill the old to bring in the news. So this is gonna be everything a real popular project for, you know, the class, what I call the classic enterprise, So what you guys both have your companies participated in. So with that is that the goal is that the gulf of this community is to reach out to the classic enterprise or open source because certainly and users are coming in like, like, like you read about, I mean they're coming in fast into the community. >>What's the goal for the community really is to provide assistant and help and guidance to the users from a community point of view. It's not just from us whether it is red hat or are ideal research, but it's really enterprises start participating and we're already seeing that interest from the enterprises because there was a big gap in this area, a lot of vendor. Exactly when you start on this journey, there will be 100 people who will be telling you all you have to do is this Yeah, that's easy. All you have to do. I know there is a red flag goes up, >>it's easy just go cloud native all the way everything is a service. It's just so easy. Just you know, just now I was going to brian gracefully, you get right on that. I want to just quickly town tangent here, brian grazer whose product strategist at red hat, you're gonna like this because he's like, look at the cloud native pieces expanding because um, the enterprises now are, are in there and they're doing good work before you saw projects like envoy come from the hyper scales like lift and you know, the big companies who are building their own stuff, so you start to see that transition, it's no longer the debate on open source and kubernetes and cloud native. It's the discussion is integration legacy. So this is the big discussion this week. Do you guys agree with that? And what would, what would be your reaction? >>Yeah, no, I, I agree with you. Right. I mean, I think, you know, I think that the stat you always here is that the 1st 20 of kind of cloud happened and now there's all the rest of it. Right? 
And, and modernization is going to be the big piece right? You have to be able to modernize those applications and those workloads and you know, they're, I think they're gonna fall in three key buckets, right? Re host free platform re factor and dependent on your business justification and you know, your needs, you're going to choose one of those paths and we just want to be able to provide open tools and a community based approach to those folks too to help that certainly will have and just, you know, just like it always does, you know, upstream first and then we'll have enterprise versions of these migration tool kits based on these projects, but you know, we really do want to kind of build them, you know, and make sure we have the best solution to the problem, which we believe community is the way to do that. >>And I think just to add to what James said, typically we are talking about enterprises, these enterprises will have thousands of applications, so we're not talking about 10 40 number. We're talking thousands or 20% is not a small number is still 233 400. But man, the work is remaining and that's why they are getting excited about cloud negative now, okay, now we have seen the benefit but this little bit here, but now, let's get, you know serious about about that transformation and this is about helping them in a cloud native uh in an open source way, which is what red hat. XL Sad. Let's bring the community together. >>I'm actually doing a story on that. You brought that up with thousands of applications because I think it's, it's under underestimate, I think it's going to be 1000s and thousands more because businesses now, software driven everywhere and observe ability has pointed this out. And I was talking to the founder of uh Ravana project and it's like, how many thousands of dashboards you're gonna need? Roads are So so this is again, this is the problems and the opportunities are coming together, the abstraction will get you to move up the stack in terms of automation. So it's kind of fascinating when you start thinking about the impact as this goes the next level. And so I have to ask your roaches since you're an IBM fellow and chief scientist, which by the way, is a huge distinction. Congratulations. Being an IBM fellow is is a big deal. Uh IBM takes that very seriously. Only a few of them. You've seen many waves and cycles of innovation. How would you categorize this one now? Because maybe I'm getting old and and loving this right now. But this seems like everything kind of coming together in one flash 10.1 major inflection point. All the other waves combined seemed to be like in this one movement very fast. What's your what's your take on this wave that we're in? >>Yes, I would really say there is a lot of technology has been developed but that technology needs to have its value unleashed and that's exactly where the intersection of those applications and that technology occurs. Um I'm gonna put in yet another. You talked about everything becoming software. This was Anderson I think uh Jack Lee said the software is eating the world another you know, another wave that has started as a i eating software as well. And I do believe these two will go inside uh to uh like let me just give you a brief example re factoring how you take your application and smart ways of using ai to be able to recommend the right microservices for you is another one that we've been working towards and some of those capabilities will actually come in this community as well. 
So when we talk about innovations in this area, We are we are bringing together the best of IBM research as well. As we are hoping the community actually uh joints as well and enterprises are already starting to join to bring together the latest of the innovations bringing their applications and the best practices together to unleash that value of the technology in moving the rest of that 80%. And to be able to seamlessly bridge from my legacy environment to the cloud native environment. >>Yeah. And hybrid cloud is gonna be multi cloud really is the backbone and operating system of business and life society. So as these apps start to come on a P i is an integration, all of these things are coming together. So um yeah, this conveyor project and conveyor community looks like a really strong approach. Congratulations. Good >>job bob. >>Yeah, great stuff. Kubernetes, enabling companies is enabling all kinds of value here in the cube. We're bringing it to you with two experts. Uh, James Richard, thanks for coming on the Cuban sharing. Thank you. >>Thank you. >>Okay, cube con and cloud native coverage. I'm john furry with the cube. Thanks for watching. Yeah.

Published Date : May 7 2021

Cheryl Hung and Katie Gamanji, CNCF | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>>From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >>Welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 21, part of the CNCF's annual event — it's virtual again this year. I'm John Furrier, host of theCUBE, and we have two great guests from the CNCF: Cheryl Hung, VP of ecosystem, and Katie Gamanji, who's the ecosystem advocate for CNCF. Thanks for coming on, great to see you. I wish we were in person — soon, maybe in the fall. Cheryl, Katie, thanks for coming on. >>Definitely hoping to be back in person again soon, but John, great to see you and great to be back on theCUBE. >>You know, one of the things that really surprised me is the resilience of the community around what's been happening with the virtual format and COVID. A lot of people have been disrupted by it, but the consensus is that developers are used to working remotely and virtually from home, so not too much disruption — and a hell of a lot of productivity. You're seeing a lot more cloud native projects, a lot more mainstreaming in the enterprise, you're starting to see cloud growth — just really nice growth. We've been saying for years that a rising tide floats all boats, Cheryl, but this year you're starting to see real mainstream adoption of cloud native, and that has really been part of the work this community has done. So what's your take on this? Because we're going to be coming out of this COVID period pretty soon — there's a post-COVID light at the end of the tunnel. What's your view? >>Yeah, definitely fingers crossed on that. I would love Katie to give her view on this, in fact, because she came from Conde Nast and American Express, both huge companies that have adopted cloud native successfully, and then in the middle of the pandemic, in the middle of COVID, she joined CNCF. So Katie really has a view from the trenches — Katie, I would love to hear your thoughts. >>Yeah, absolutely. Cloud native adoption, when it comes to the tooling, has definitely become more permanent in the enterprises, and that was confirmed in my role at American Express — that's the role I moved from to CNCF. But the more surprising thing is that we see big companies — banks and financial organizations — looking to adopt open source, and more importantly looking for ways to either contribute to it or actually help direct it. So from that perspective I've been pretty much at the nucleus of the enterprise adoption of cloud native. It is definitely moving — slow-paced, but definitely forward-moving. And now, in my role with CNCF as ecosystem advocate leading the end user community, the community is definitely growing. I'm always intrigued to find out more about cloud native usage, and one of the things I find quite interesting is that no two organizations' usage of the platform is going to be the same. So it's always intriguing to find new use cases, and the extreme cases as well, because they really push the community forward. >>I want to unpack the end user aspect of this. It has been a hallmark of the CNCF for years, always a staple of the organization, but this year more than ever it seems to be prominent as people are integrating. What about the growth from last year to this year in the end user ecosystem? Are there any highlights — any stats or observations around how the ecosystem is growing around the end user piece? >>Sure, absolutely. I can talk directly about CNCF and the CNCF end user community. Much like everything else, COVID slowed things down, so we're not entirely surprised by that, but we're still growing over 2020, and in fact just in the last few months we've brought in some really big names like Peloton, Airbnb and Citibank — incredible organizations who have really adopted cloud native, who have seen the success and the benefits of it, and who are now looking to give back to the community, as Katie said: get involved with open source and be more than just a passive consumer of the technologies, but actually become leaders in their own right. >>Katie, talk about the dynamic of developers at end user organizations. You've been there — you've now been on both sides of the table, if you will; or not sides of the table, it's more of a round table, community driven. But the traditional end user organizations — not the early adopters, not the hyperscalers, but the ones now really embedding hybrid — are changing how IT and modern applications are built. That's a big theme in these mainstream organizations. What's the dynamic going on? What's your view? >>I think, for any organization, the core of what moves it towards cloud native is being ahead of its competitors, and now that we have this mass of different organizations in cloud native, that's why we see more eyes on this area. On the technology side, companies are looking to deploy complex applications in an easier manner, especially when it comes to pushing them to production securely, faster and continuously. They're looking for a competitive edge in how quickly they can respond to customer feedback. And they're looking for the hybrid element that has been talked about — for enterprises it's not just about public cloud, it's about how they can run applications securely with an element of data centers or private cloud as well. Now we see a lot of projects balancing around that edge, and more importantly there is adoption, and where there's adoption there is a feedback loop — and that's what represents the organic growth. >>That's awesome. Cheryl, I'd like you to define what you mean when you say end-user-driven open source. What does that mean? >>This is a really interesting dynamic that I've seen over the last couple of years. What we see is that more and more of the open source projects come from end users who are solving their own problems, creating their own projects and donating them back to the community. Early examples of this were Envoy from Lyft and Jaeger from Uber, but Spotify also recently donated Backstage, which is a developer portal that has really taken off. We've also got examples like Intuit donating Argo — I'm sure there are others I've forgotten. The really interesting thing I see about this is that, classically, maybe a few years ago, if you were an end user organization you'd get involved through a vendor — you'd go to a Red Hat or something and say, hey, you fix this on my behalf, because that's what I'm paying you to do. Whereas what I see now is end users saying: we want to keep this expertise in house, and we want to be owners of our own direction and our own fate when it comes to these open source projects. That's been a big driver for this trend of end-user-driven open source. >>The open model is such a great thing, and one of the interesting things is that it fits with a lot of people who want to work for mission-driven companies — but here there's actually a business benefit, as you pointed out, in the dynamic of bringing things to the community. I'm sure the ability to do more collaboration, whether hiring or contributing, increases when you have this end user dynamic, because it's a pretty big decision to donate something into open source. What's the playbook, though? If I'm sitting in an end user organization like American Express, Katie, or any big company, and we've developed this really killer use case, niche to us, but we want to bring it to the community — what do they do? Is there a manager? Do they knock on someone's door? Is there a repo? How does an end user get this done? >>I think one of the best resources out there is called the TODO Group, which is an organization under the Linux Foundation — so a sister group to CNCF — about open source program offices and how you formalize an open source program. Because it's pretty easy to say, oh well, just put something on GitHub — but that's not the end of the story. If you want to actually build a community, if you want other people to contribute, then you do have to do more than just drop it on GitHub and walk away. So I would say: if you are an end user company and you have created something which scratches your own itch, and you think other people could benefit from it, then definitely come — you could email me, you could email Chris Aniszczyk, the CTO of CNCF, and just get in touch and ask around about the things you have to think about: the licensing, how you develop a community governance program, trademark issues, all of those things. >>It's interesting how much open source is growing now — Chris has so much action going on, new verticals are opening up. Cheryl, you had posted your predictions for cloud native, which I found interesting because there's so much action going on that you had to break things out into pillars — tech, DevOps and ecosystem — each with a slew of key trends. Take us through the mindset: why break it out like that? Traditionally that was all bundled into one. Why the pillars — is it because there's so much action? What's the basis behind the predictions? >>Originally this was just a giant list of things I had seen from talking to people, reading around and seeing what people were talking about on social media. Once I landed on these ten, I thought about what the list actually means for the people who are going to look at it and what they should care about. I see tech trends as things related to tools and frameworks — perhaps architectures; I see DevOps as more of a combination of process, people and culture, best practices; and ecosystem is anything broader than that — things that happen across organizations. You can go to my Twitter — @oicheryl — and take a look at it. This is my list of ten; I would love to hear whether you agree with it, or whether you think there are other things I've missed. >>I love the top one — more Rust in cloud native, that's the number one item. I think cross-cloud is definitely happening; people are really starting to think about that, so I'd love to get your comments. But the thing that jumped out at me was the DevOps piece, because that's a trend I've been seeing a lot more, even in academic institutions, for folks in school going to college for computer science and engineering. This idea of large-scale cloud is not so much an IT practice; it's much more of a cloud native mindset. So this idea of ops is so much more about scale — I use SRE only because I can't think of a better word for it — and certainly the edge piece with Kubernetes. I think that's the biggest story to me; that's where all the action seems to be when I talk to people about what they're working on, training new people, onboarding and whatnot. Katie, you're shaking your head — what are your thoughts? >>Yeah, I have definitely been through all of these stages, from having a team where DevOps — which I think is more of a culture, a pattern to adopt within an organization, than anything else — so I've been pre-DevOps, within DevOps, and through the evolution of it, where we actually added an SRE team as well. These cultural changes within an organization are necessary, especially if you want to iterate quicker and actually deliver value to customers with minimal latency, because what it drives is collaboration between teams which were initially segregated. That's why there is a paradigm nowadays called DevSecOps, which moves security further to the left — it has been very popular, especially in the last couple of months; there are lots of talks around it, and there's even a security co-located event at KubeCon focusing mainly on that. As well, within the DevOps area, one of the models that has become quite prominent is GitOps, which uses the power of Git repositories to describe the state of applications — how they should actually be within the production system. Within the cloud native ecosystem there are two main tools that lead this area: Argo CD, which was donated by Intuit, one of our end users, and Flux, which was donated by Weaveworks. Both of these projects are currently in the incubation stage, which by default showcases that there is a lot of adoption from organizations — more than a hundred for some of them. One more thing I would like to mention is the GitOps working group, which emerged between KubeCon Europe and North America last year, and that is there to define a manifest for how exactly GitOps patterns should be adopted within organizations. So there are a lot of initiatives, and this is further confirmed by the tooling we have within the ecosystem. >>That's really awesome insight. I want to follow up on that, if you don't mind: why is GitOps so important right now? Is it the emphasis on security, the emphasis on scale? Is it just that storing it in Git is okay because there are so many more inspections going on around it? Code reviews have been going on for a long time — what's the big deal, why is it so hot right now, in your opinion? >>I think there are a couple of aspects that are quite important. You mentioned security — that's definitely one of them. With the GitOps pattern there is a pull model rather than a push model: the tool — for example Argo CD or Flux — watches the repository, and if any changes are identified it pulls those changes automatically. So the first thing we get from this model is that we always know the delta between what's in our repositories and the production system. Usually, with a push model, you can push changes to the staging environment but not always to production, because you have change windows; with the GitOps model you'll always be aware of the delta, and you have quite a nice way to visualize it, especially with Argo CD, which has a UI as well. Also, with the GitOps pattern there is less need to share credentials with a pipeline tool — because Argo and Flux are natively built around Kubernetes, all the secrets reside within the cluster, and there is no need to share extra credentials or permissions with external tools. And it scales: with GitOps you have historical data points, which lets you easily revert to stable points of the application in the past. So multiple benefits, I would say, but security is definitely one of the main ones, and it has been talked about quite a lot.
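Katie's description of the pull model is the heart of GitOps, and its shape can be sketched in a few lines. The code below is not how Argo CD or Flux is implemented — it is only a toy reconcile loop, with made-up helper functions standing in for "read manifests from Git", "query the cluster" and "apply an object" — but it shows the idea she describes: the agent runs inside the cluster, pulls the desired state, compares it with the live state, and applies the difference, so nothing outside the cluster ever needs push credentials.

```python
# Toy GitOps-style reconcile loop -- illustrative only, not Argo CD or Flux.
# The helpers (fetch_desired_state, read_live_state, apply) are hypothetical stubs.
import time

def fetch_desired_state(repo_url: str, path: str) -> dict:
    """Clone/pull the Git repo and parse the manifests under `path` (stubbed)."""
    raise NotImplementedError

def read_live_state(namespace: str) -> dict:
    """Ask the Kubernetes API for the objects currently running (stubbed)."""
    raise NotImplementedError

def apply(manifest: dict) -> None:
    """Create or update one object in the cluster (stubbed)."""
    raise NotImplementedError

def reconcile(repo_url: str, path: str, namespace: str) -> None:
    desired = fetch_desired_state(repo_url, path)   # pulled from Git, never pushed to the agent
    live = read_live_state(namespace)
    for name, manifest in desired.items():
        if live.get(name) != manifest:              # the "delta" Katie mentions
            print(f"drift detected for {name}, re-applying desired state")
            apply(manifest)
    # anything live but absent from Git could be pruned here as well

if __name__ == "__main__":
    while True:                                     # the agent loops forever inside the cluster
        reconcile("https://example.com/platform-config.git", "prod/", "prod")
        time.sleep(30)
```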
>>A lot of these end user stories revolve around these dynamics — the ones you're promoting, from your members and the community at large. I hate to use the phrase "day two operations," but that really is the issue: okay, we're up and running, now I want more automation. It's an ops vibe — we've got to go troubleshoot all of this, but it should keep working as more stuff comes in. What's the push behind all these stories around automation and day two operations? What do you guys think? >>I think the expectations are getting higher and higher, to be honest. A few years ago it was enough to use containers and the bare minimum to orchestrate them. Now what we see is that it's easy to choose the technology, easy to install it and even configure it — but, as you said, John, those day two operations are really, really hard. For example, one of the things we've seen up and coming, and that we care about at CNCF, is Kubernetes on the edge, and we see this as enabling telco use cases, 5G, IoT and really broad, difficult use cases that just a few years ago would have been nigh on impossible. Katie, you also talk about edge, right? >>Absolutely. I really like to watch some of the talks at KubeCon, especially those given by the big organizations that have to manage thousands, tens of thousands, hundreds of thousands of customers and deliver clusters to those teams. From their point of view, they have to manage clusters at scale. The edge is definitely out there, and they're really pushing the technology towards getting closer to the physical devices within the customer's own bubble, or area of service. So edge has been moving a lot within the cloud native ecosystem. We've had a lot of projects moving towards the incubation stage — K3s has been there for a while, has a lot of adoption and is known for its stability. Another thing I'd like to mention is that we currently have a lot of edge-focused projects still in the sandbox, so there is a lot of potential: if there's higher demand, I would expect these tools to move from sandbox to incubation and even graduation. So that's definitely something that's moving, and there is dynamism around it. >>Well, Cheryl, Katie, you guys are awesome — love the work you're doing. I've got to ask the final question, since you brought up expectations. Could you both end the segment with a comment on expectations as the industry, companies, developers and participants continue to grow? What's changed with CNCF, KubeCon and CloudNativeCon as expectations have grown and the stakes are higher too, frankly — you've got security, you mentioned edge and GitOps, and you're starting to see the maturation of this ecosystem. What's new and what's expected of you? What do you see, and how are you organizing? >>I think we can definitely say the ecosystem has matured a lot compared to a few years ago. Same with CNCF, same with KubeCon — I think the very first KubeCon I went to was Berlin, which was about 1,800 people, and it's mind-boggling to see how much it has grown since then. One of the things we try to do is expand the number of people who can reach the community. For example, we launched Kubernetes Community Days — community-organized events in Africa, for instance, for people who couldn't come to large events in North America or Europe — and we're also launching things to help students. I love talking to students, because quite often now they tell you, oh, I've never run software in anything other than a container. You're like, yeah, well, this was a new thing — brand new a few years ago — and now you can be 18 and have never tried anything else. It's pretty amazing. But yes, there's always more space for the community to grow into. >>Yeah, once you go cloud native it's like you've never loaded Linux onto a server before. Katie, your thoughts as expectations go higher? Certainly there's more migration, and not only young folks — engineering meets computer science, it's now cross-discipline. You're seeing scale — you mentioned scaling up, those are huge factors — you've got younger people, cross-training, cybersecurity, and the FinOps work that Chris is working on. So much is happening. How do you keep up, and how are you going to raise the bar? >>Absolutely. The technology keeps moving forward, but nowadays there is more need for actual end user stories. At the beginning of KubeCon there was a lot of focus on the technical aspects — how you fix this particular problem of deploying between two clusters, or deploying at scale. Nowadays people are looking for the stories, because, as I mentioned, no two cloud native platforms are going to be the same, and the community is still looking for patterns and standards. We can see this especially with the open standards moving within observability and application delivery — for application delivery we have, for example, Crossplane and KubeVela; we have OpenMetrics and OpenTracing, which focus on observability; and all the interfaces we have around the container runtime, service mesh and so forth. All of these try to bring a benchmark, making it easier to integrate those special, extreme use cases and the technology solutions you need to provide. And as I mentioned, end user stories are more in demand now, mainly because they are very necessary for the community — the SIGs and the project maintainers require feedback to move forward. As part of that, I'd like to mention that we recently soft-launched the end user lounge, which focuses on exactly this aspect of end user stories: we ask our end users questions to really understand what moved them to adopt cloud native, what keeps them on this path, and what future challenges they're facing or would like to tackle. So we're trying to create that feedback loop between the end users and the projects out there. These two spheres are currently somewhat segregated, and they need to be brought closer together — that's what we're trying to solve. >>You guys do great work — great job. Cheryl, wrap us up: take a minute to put in a plug for the CNCF and the ecosystem. What's the fashion this year, what's hot, what's the trend, what are you doing? Share a quick update on what's going on in the ecosystem from your perspective. >>Yeah — even though I just said we're maturing, the growth has not stopped. What we're seeing now, as Katie was saying, is more specific use cases, even bigger and more demanding environments, even crazier use cases. I love the story from the U.S. Department of Defense about putting Kubernetes on their fighter jets, and Istio on fighter jets — it's just absurd to think about. But I would say: definitely come and be part of the community, share your stories, share what you know, help other people. If you are an end user of these technologies, go to cncf.io/enduser and come be part of our community — meet your peers and hear what everybody else is doing. >>Well, having Kubernetes and Istio on jets — that's the Air Force; I'd call that the technical edge, Katie, to bring back the edge. Cheryl, Katie, thank you so much for sharing. The ecosystem is robust; the rising tide is floating all the boats, as we always say here on theCUBE. It's been great to watch, and we'll continue to watch the rise — I think it's just the beginning. We're starting to see post-pandemic visibility, cloud native, more standards, more visibility into the economics and value, and it's great to see the ecosystem rising up with the end users as well. Congratulations, and thanks for coming on. >>Thank you so much, John — it's a pleasure. >>Thank you for having us, John. >>Great to have you on. I'm John Furrier with theCUBE, here for KubeCon + CloudNativeCon 21 Virtual. Soon we'll be back in real life. Thanks for watching.

Published Date : May 5 2021


Tiji Mathew, Patrick Zimet and Senthil Karuppaiah | Io-Tahoe Data Quality Active DQ


 

(upbeat music) (logo pop up) >>Narrator: From around the globe, it's theCUBE, presenting ActiveDQ intelligent automation for data quality, brought to you by Io-Tahoe. >>Are you ready to see ActiveDQ on Snowflake in action? Let's get into the show-and-tell and do the demo. With me are Tiji Mathew, Data Solutions Engineer at Io-Tahoe, Patrick Zimet, also a Data Solutions Engineer at Io-Tahoe, and Senthilnathan Karuppaiah, who's the Head of Production Engineering at Io-Tahoe. Patrick, over to you — let's see it. >>Hey Dave, thank you so much. Yeah, we've seen a huge increase in the number of organizations interested in Snowflake implementations, looking for an innovative, precise and timely method to ingest their data into Snowflake. Where we are seeing a lot of success is a ground-up method utilizing both Io-Tahoe and Snowflake. To start, you define your as-is model by leveraging Io-Tahoe to profile your various data sources and push the metadata to Snowflake. That means we create a data catalog within Snowflake as a centralized location to document items such as source system owners, allowing you to have those key conversations and understand the data's lineage, potential blockers and what data is readily available for ingestion. Once the data catalog is built, you have a much more dynamic strategy surrounding your Snowflake ingestion. What's great is that, while you're working through those key conversations, Io-Tahoe will maintain that metadata push, and — paired with Snowflake's ability to version the data — you can easily incorporate potential schema changes along the way, making sure that the information you're working on stays as current as the systems you're hoping to integrate with Snowflake. >>Nice. Patrick, I wonder if you could address how the Io-Tahoe platform scales, and maybe in what way it provides a competitive advantage for customers. >>Great question. Where Io-Tahoe shines is through its ActiveDQ — the ability to monitor your data's quality in real time, marking which rows need remediation according to customized business rules that you can set, ensuring the data quality standards meet the requirements of your organization. What's great is that, through our use of RPA, we can scale with an organization: as you ingest more data sources, we can allocate more robotic workers, so the results continue to be delivered in the same timely fashion you've grown used to. What's more, Io-Tahoe is doing the heavy lifting on monitoring data quality, which frees up your data experts to focus on the more strategic tasks such as remediation, data augmentation and analytics development.
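Patrick's "profile the sources and push the metadata" step is easier to picture with a small example. The sketch below is not Io-Tahoe's product or its actual catalog schema — the connection details, column names and the DATA_CATALOG table are assumptions for illustration — but it shows the general shape of the work: profile a source table, derive per-column metadata, and land that metadata in a catalog table that lives alongside the data in Snowflake.

```python
# Hedged sketch: profile a source table and push per-column metadata into a
# simple catalog table in Snowflake. Table/column names, connection details and
# the catalog layout are illustrative assumptions, not Io-Tahoe's schema.
import pandas as pd
import snowflake.connector

def profile(df: pd.DataFrame, source_name: str) -> list:
    """Return one metadata row per column: (source, column, dtype, null %, distinct count)."""
    rows = []
    for col in df.columns:
        rows.append((
            source_name,
            col,
            str(df[col].dtype),
            round(float(df[col].isna().mean()) * 100, 2),
            int(df[col].nunique()),
        ))
    return rows

# Pretend this frame came from profiling a source-system extract.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "zip_code": ["60601", "6060", None, "94105"],
})
metadata = profile(customers, "crm.customers")

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="demo_wh", database="demo_db", schema="catalog",
)
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS data_catalog (
        source_name STRING, column_name STRING, data_type STRING,
        null_pct FLOAT, distinct_count NUMBER
    )
""")
cur.executemany(
    "INSERT INTO data_catalog (source_name, column_name, data_type, null_pct, distinct_count) "
    "VALUES (%s, %s, %s, %s, %s)",
    metadata,
)
conn.close()
```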
>>Okay. Maybe, Tiji, you could address this: how does all this automation change the operating model that we were talking to Ajay and Duncan about before? I mean, if it involves fewer people and more automation, what else can I do in parallel? >>I'm sure the participants today will be asking the same question. Let me start with the strategic tasks Patrick mentioned. Io-Tahoe does the heavy lifting, freeing up data experts to act upon the data events generated by Io-Tahoe. Companies that have teams focused on manually building an inventory of the data landscape see longer turnaround times in producing actionable insights from their own data assets, which diminishes the value realized by traditional methods. Our operating model, however, involves profiling and remediating at the same time, creating a cataloged data estate that can be used by business or IT accordingly. With increased automation and fewer people, our machine learning algorithms augment the data pipeline to tag and capture the data elements into a comprehensive data catalog. As Io-Tahoe automatically catalogs the data estate in a centralized view, the data experts can focus on remediating the data events generated from validating against business rules. We envision that data events, coupled with this drillable and searchable view, become a comprehensive way to assess the impact of bad-quality data. Let's briefly look at the image on screen: for example, the view indicates that bad-quality zip code data impacts the contact data, which in turn impacts other related entities and systems. Now contrast that with a manually maintained spreadsheet that drowns out the main focus of your analysis. >>Tiji, how do you tag and capture bad-quality data and stop it — you've mentioned these dependencies — how do you stop it from flowing downstream into the processes, applications or reports? >>As Io-Tahoe builds the data catalog across source systems, we tag the elements that meet the business rule criteria, while segregating the failed data examples associated with the elements that fall below a certain threshold. The elements that meet the business rule criteria are tagged to be searchable, providing an easy way to identify data elements that may flow through the system. The segregated data examples, on the other hand, are used by data experts to triage for the root cause. Based on the root cause, the potential outcomes could be: one, changes in the source system to prevent that data from entering in the first place; two, adding data pipeline logic to sanitize bad data from being consumed by downstream applications and reports; or simply accepting the risk of storing bad data and addressing it when it reaches a certain threshold. However, Dave, as for your question about preventing bad-quality data from flowing into the system: Io-Tahoe will not prevent it, because the controls on data flowing between systems are managed outside of Io-Tahoe. Io-Tahoe will, though, alert and notify the data experts with events that indicate bad data has entered the monitored assets. We have also redesigned our product to be modular and extensible, which allows data events generated by Io-Tahoe to be consumed by any system that wants to protect its targets from bad data. In that way Io-Tahoe empowers the data experts to control the bad data flowing into their systems.
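As a rough illustration of the tag-or-segregate behaviour Tiji describes — not Io-Tahoe's implementation, and with made-up rule and threshold values — the sketch below applies one business rule to a column, tags the element as searchable if the pass rate clears the threshold, and otherwise emits a "data event" and sets aside the failing examples for triage.

```python
# Toy rule evaluation in the spirit of the discussion above (illustrative only).
import re
from dataclasses import dataclass, field

US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")   # assumed conformity rule for zip codes

@dataclass
class RuleResult:
    element: str
    pass_rate: float
    tags: list = field(default_factory=list)
    data_events: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

def evaluate_zip_rule(element: str, values: list, threshold: float = 0.95) -> RuleResult:
    passed = [v for v in values if v and US_ZIP.match(v)]
    failed = [v for v in values if not (v and US_ZIP.match(v))]
    result = RuleResult(element=element, pass_rate=len(passed) / len(values))
    if result.pass_rate >= threshold:
        result.tags.append("conforms:us_zip")           # searchable tag in the catalog
    else:
        result.data_events.append(                      # event for data experts to triage
            {"element": element, "rule": "us_zip", "pass_rate": result.pass_rate})
        result.quarantined = failed                     # segregated failing examples
    return result

print(evaluate_zip_rule("crm.customers.zip_code",
                        ["60601", "6060", None, "94105"]))
```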
>>Thank you for that. One of the things that we've noticed, and written about, is that you've got these hyper-specialized roles within the centralized data organization. How do the data folks get involved here, if at all, and how frequently do they get involved? Maybe, Senthilnathan, you could take that. >>Thank you, Dave, for having me here. Well, it depends on whether the data element in question is in the data cataloging phase or the monitoring phase — different data folks get involved. When it is in the data cataloging stage, the data governance team, along with enterprise architecture or IT, is involved in setting up the data catalog. That includes identifying the critical data elements; business term identification, definition and documentation; data quality rules and data events; setting up data domain and business line mappings; lineage tracking; source of truth; and so on and so forth. It's typically a one-time set-up: review, certify, then govern and monitor. When it is in the monitoring phase, during any data incident or data issue Io-Tahoe broadcasts data signals to the relevant data folks to act and remediate as quickly as possible, and alerts the consumption teams — whether data science, analytics or business ops — about the potential issue, so that they are aware and take the necessary preventative measures. Let me show you an example of a critical data element, from the data quality dashboard view to the lineage view to the data 360-degree view, for a zip code conformity check. In this case the zip code did not meet the pass threshold during the technical data quality check and was identified as a non-compliant item, and a notification was sent to the IT folks. Clicking on the zip code takes you to the lineage view, to visualize the dependent systems — who the producers and the consumers are — and drilling down further takes us to the detailed view, where a lot of other information is presented to facilitate a root cause analysis and take it through to final closure.
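To ground the zip code example, here is a hedged sketch of what a conformity check like that can look like when it runs directly against Snowflake. It is not Io-Tahoe's ActiveDQ code — the table name, the 95% threshold and the quarantine table are assumptions — but it follows the pattern described in the demo: measure the pass rate, compare it with a threshold, and persist the non-compliant records for the responsible team to triage.

```python
# Hedged sketch of a zip-code conformity check run in Snowflake (not ActiveDQ itself).
# CUSTOMERS, ZIP_CODE, the threshold and DQ_QUARANTINE are illustrative assumptions.
import snowflake.connector

PASS_THRESHOLD = 0.95
ZIP_REGEX = r'^\d{5}(-\d{4})?$'

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="demo_wh", database="demo_db", schema="public",
)
cur = conn.cursor()

# 1. Measure conformity for the critical data element.
cur.execute(
    "SELECT COUNT_IF(REGEXP_LIKE(zip_code, %s)) / COUNT(*) FROM customers",
    (ZIP_REGEX,),
)
pass_rate = float(cur.fetchone()[0])

# 2. If the element falls below the threshold, persist the non-compliant rows
#    so the responsible team gets the failing examples alongside the alert.
if pass_rate < PASS_THRESHOLD:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS dq_quarantine AS
        SELECT * FROM customers WHERE 1 = 0
    """)
    cur.execute(
        "INSERT INTO dq_quarantine SELECT * FROM customers "
        "WHERE zip_code IS NULL OR NOT REGEXP_LIKE(zip_code, %s)",
        (ZIP_REGEX,),
    )
    print(f"zip_code pass rate {pass_rate:.1%} below threshold -- rows quarantined")
else:
    print(f"zip_code pass rate {pass_rate:.1%} -- compliant")

conn.close()
```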
>>Thank you for that. So, Tiji, Patrick was talking about the as-is and the to-be. I'm interested in how it's done now versus before. Do you need a data governance operating model, for example? >>Typically, a company that decides to make an inventory of its data assets starts out by manually building a spreadsheet, managed by the data experts of the company. What started as a draft then gets baked into the operating model of the company. This leads to a loss of collaboration, as each department makes a copy of the catalog for its specific needs. The decentralized approach leads to a loss of uniformity, with each department holding different definitions — which, ironically, needs a governance model for the data catalog itself. As the spreadsheet grows in complexity, the skill level needed to maintain it also increases, leaving fewer and fewer people who know how to maintain it. Above all, the content that took so much time and effort to build is not searchable outside of that spreadsheet document. >>Yeah, I think you really hit the nail on the head, Tiji. Companies now want to move away from the spreadsheet approach, and Io-Tahoe addresses the shortcomings of the traditional approach, enabling companies to achieve more with less. >>Yeah — and what has the customer reaction been? We had Webster Bank on one of the early episodes, for example. Could they have achieved what they did without something like active data quality and automation? Maybe, Senthilnathan, you could address that. >>Sure. It is impossible to achieve full data quality monitoring and remediation without automation, or digital workers, in place. The reality is that they don't have the time to do the remediation manually, because they have to analyze, conform and fix any data quality issues as fast as possible before they get bigger — and Webster is no exception. That's why Webster implemented Io-Tahoe's ActiveDQ to set up business metadata management and data quality monitoring and remediation in the Snowflake cloud data lake. We helped in building the center of excellence in data governance, which manages the data catalog and the scheduled, on-demand and in-flight data quality checks — and Snowflake's Snowpipe and Streams are super beneficial for achieving in-flight quality checks. Then there's the data consumption monitoring and reporting. Last but not least, the time saver is persisting the non-compliant records for every data quality run within the Snowflake cloud, along with a remediation script, so that during any exception the respective team members are not only alerted but also supplied with the necessary scripts and tools to perform remediation right from Io-Tahoe's ActiveDQ. >>Very nice. Okay guys, thanks for the demo — great stuff. Now, if you want to learn more about the Io-Tahoe platform and how you can accelerate your adoption of Snowflake, book some time with a data RPA expert: all you've got to do is click on the demo icon on the right of your screen and set up a meeting. We appreciate you attending this latest episode of the Io-Tahoe data automation series. If you missed any of the content, it's all available on demand. This is Dave Vellante for theCUBE. Thanks for watching. (upbeat music)

Published Date : Apr 29 2021


Ajay Vohora and Duncan Turnbull | Io-Tahoe ActiveDQ Intelligent Automation for Data Quality


 

>>From around the globe, but it's the cube presenting active DQ, intelligent automation for data quality brought to you by IO Tahoe. >>Now we're going to look at the role automation plays in mobilizing your data on snowflake. Let's welcome. And Duncan Turnbull who's partner sales engineer at snowflake and AIG Vihara is back CEO of IO. Tahoe is going to share his insight. Gentlemen. Welcome. >>Thank you, David. Good to have you back. Yeah, it's great to have you back >>A J uh, and it's really good to CIO Tao expanding the ecosystem so important. Um, now of course bringing snowflake and it looks like you're really starting to build momentum. I mean, there's progress that we've seen every month, month by month, over the past 12, 14 months, your seed investors, they gotta be happy. >>They are all that happy. And then I can see that we run into a nice phase of expansion here and new customers signing up. And now you're ready to go out and raise that next round of funding. I think, um, maybe think of a slight snowflake five years ago. So we're definitely on track with that. A lot of interest from investors and, um, we're right now trying to focus in on those investors that can partner with us, understand AI data and, and automation. >>So personally, I mean, you've managed a number of early stage VC funds. I think four of them, uh, you've taken several comp, uh, software companies through many funding rounds and growth and all the way to exit. So, you know how it works, you have to get product market fit, you know, you gotta make sure you get your KPIs, right. And you gotta hire the right salespeople, but, but what's different this time around, >>Uh, well, you know, the fundamentals that you mentioned though, those are never change. And, um, what we can say, what I can say that's different, that's shifted, uh, this time around is three things. One in that they used to be this kind of choice of, do we go open source or do we go proprietary? Um, now that has turned into, um, a nice hybrid model where we've really keyed into, um, you know, red hat doing something similar with Santos. And the idea here is that there is a core capability of technology that independence a platform, but it's the ability to then build an ecosystem around that made a pervade community. And that community may include customers, uh, technology partners, other tech vendors, and enabling the platform adoption so that all of those folks in that community can build and contribute, um, while still maintaining the core architecture and platform integrity, uh, at the core of it. >>And that's one thing that's changed was fitting a lot of that type of software company, um, emerge into that model, which is different from five years ago. Um, and then leveraging the cloud, um, every cloud snowflake cloud being one of them here in order to make use of what customers, uh, and customers and enterprise software are moving towards. Uh, every CIO is now in some configuration of a hybrid. Um, it is state whether those cloud multi-cloud on prem. That's just the reality. The other piece is in dealing with the CIO is legacy. So the past 15, 20 years they've purchased many different platforms, technologies, and some of those are still established and still, how do you, um, enable that CIO to make purchase while still preserving and in some cases building on and extending the, the legacy, um, material technology. 
So they've invested their people's time and training and financial investment into solving a problem, customer pain point, uh, with technology, but, uh, never goes out of fashion >>That never changes. You have to focus like a laser on that. And of course, uh, speaking of companies who are focused on solving problems, don't can turn bill from snowflake. You guys have really done a great job and really brilliantly addressing pain points, particularly around data warehousing, simplified that you're providing this new capability around data sharing, uh, really quite amazing. Um, Dunkin AAJ talks about data quality and customer pain points, uh, in, in enterprise. It, why is data quality been such a problem historically? >>Oh, sorry. One of the biggest challenges that's really affected by it in the past is that because to address everyone's need for using data, they've evolved all these kinds of different places to store all these different silos or data marts or all this kind of clarification of places where data lives and all of those end up with slightly different schedules to bringing data in and out. They end up with slightly different rules for transforming that data and formatting it and getting it ready and slightly different quality checks for making use of it. And this then becomes like a big problem in that these different teams are then going to have slightly different or even radically different ounces to the same kinds of questions, which makes it very hard for teams to work together, uh, on their different data problems that exist inside the business, depending on which of these silos they end up looking at and what you can do. If you have a single kind of scalable system for putting all of your data into it, you can kind of sidestep along to this complexity and you can address the data quality issues in a, in a single and a single way. >>Now, of course, we're seeing this huge trend in the market towards robotic process automation, RPA, that adoption is accelerating. Uh, you see, in UI paths, I IPO, you know, 35 plus billion dollars, uh, valuation, you know, snowflake like numbers, nice cops there for sure. Uh, agent you've coined the phrase data RPA, what is that in simple terms? >>Yeah, I mean, it was born out of, uh, seeing how in our ecosystem concern community developers and customers, uh, general business users for wanting to adopt and deploy a tar hose technology. And we could see that, um, I mean, there's not monkeying out PA we're not trying to automate that piece, but wherever there is a process that was tied into some form of a manual overhead with handovers and so on. Um, that process is something that we were able to automate with, with our ties technology and, and the deployment of AI and machine learning technologies specifically to those data processes almost as a precursor to getting into financial automation that, um, that's really where we're seeing the momentum pick up, especially in the last six months. And we've kept it really simple with snowflake. We've kind of stepped back and said, well, you know, the resource that a snowflake can leverage here is, is the metadata. So how could we turn snowflake into that repository of being the data catalog? And by the way, if you're a CIO looking to purchase a data catalog tool stop, there's no need to, um, working with snowflake, we've enable that intelligence to be gathered automatically and to be put, to use within snowflake. So reducing that manual effort, and I'm putting that data to work. 
>> You know, what's interesting here, just a quick aside: as you know, I've been watching Snowflake now for a while, and of course the competitors come out and maybe criticize why they don't have this feature or that feature, and Snowflake always seems to have an answer. And the answer oftentimes is, well, the ecosystem is going to bring that, because we have a platform that's so easy to work with. So I'm interested, Duncan, in what kind of collaborations you're enabling with high-quality data, and of course your data sharing capability.

>> Yeah, so the ability to work on data sets isn't just limited to inside the business itself, or even between different business units, which is what we were discussing earlier with the silos. When we look at this idea of collaboration, we have these situations where we want to be able to exploit data to the greatest degree possible, but we need to maintain the security, safety, privacy, and governance of that data; it could be quite valuable, it could be quite personal, depending on the application involved. One of the novel applications of data sharing that we see between organizations is this idea of data clean rooms. These clean rooms are safe, collaborative spaces which allow multiple companies, or even divisions inside a company with particular privacy requirements, to bring two or more data sets together for analysis, without having to actually share the whole unprotected data set with each other. And when you do this inside of Snowflake, you can collaborate using standard toolsets: you can use all of our SQL ecosystem, all of the data science ecosystem that works with Snowflake, all of the BI ecosystem that works with Snowflake, but in a way that keeps the confidential parts of the data intact. You can only really do these kinds of collaborations, especially across organizations, but even inside large enterprises, when you have good, reliable data to work with; otherwise your analysis just isn't going to work properly. A good example is one of our large gaming customers, who is an advertiser. They were able to build targeted ads to acquire customers and measure the campaign impact on revenue, while keeping their data safe and secure as they worked with advertising partners. The business impact was a lift of 20 to 25% in campaign effectiveness through better targeting, which pulled through into a reduction in customer acquisition costs, because they just didn't have to spend as much on the forms of media that weren't working for them.
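As a rough sketch of the mechanics behind that kind of collaboration, the statements below expose only an aggregated, privacy-preserving view of a hypothetical AD_IMPRESSIONS table to a partner account using Snowflake secure views and secure data sharing; all object names and the partner account identifier are invented, and a production clean room would add stricter controls on top of this.

-- Share aggregated campaign reach with an advertising partner, never row-level data.
CREATE SECURE VIEW marketing.shared.campaign_reach AS
SELECT
    campaign_id,
    DATE_TRUNC('day', impression_ts) AS impression_day,
    COUNT(DISTINCT user_id)          AS unique_users,   -- aggregates only, no user-level PII
    COUNT(*)                         AS impressions
FROM marketing.raw.ad_impressions
GROUP BY campaign_id, DATE_TRUNC('day', impression_ts);

-- Package the secure view into a share and grant it to the partner's account.
CREATE SHARE campaign_reach_share;
GRANT USAGE ON DATABASE marketing TO SHARE campaign_reach_share;
GRANT USAGE ON SCHEMA marketing.shared TO SHARE campaign_reach_share;
GRANT SELECT ON VIEW marketing.shared.campaign_reach TO SHARE campaign_reach_share;
ALTER SHARE campaign_reach_share ADD ACCOUNTS = partner_account_identifier;  -- hypothetical account

Because the partner queries the share in place, there is no second copy of the underlying impression data to govern on their side.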
>> So, Ajay, I wonder, with the way public policy is shaping up, you know, obviously GDPR started it, then in the States the California Consumer Privacy Act, and people are sort of taking the best of those, so there's a lot of differentiation. What are you seeing in terms of governments really driving this move to privacy?

>> Government and public sector, we're seeing a huge wake-up in activity across the whole piece. Part of it has been data privacy. The other part of it is being more joined up and more digital, rather than paper- or form-based. We've all got stories of waiting in line holding a form, taking that form to the front of the line, and handing it over a desk. Now government and public sector are really looking to transform their services into online self-service, and that whole shift is driving the need to emulate a lot of what the commercial sector is doing, to automate their processes and to unlock the data from silos to feed those processes. Another thing I can say about this: the need for data quality, as Duncan mentioned, underpins all of these processes, in government, pharmaceuticals, utilities, banking, insurance. The ability for a chief marketing officer to drive a loyalty campaign, the ability for a CFO to reconcile accounts at the end of the month and do a quick, accurate financial close, the ability of customer operations to make sure the customer has the right details about themselves in the right application, all of that is underpinned by data and is effective or not based on the quality of that data. So while we're mobilizing data to the Snowflake cloud, the ability to then drive analytics, prediction, and business processes off that cloud succeeds or fails on the quality of that data.

>> I would say it really is table stakes. If you don't trust the data, you're not going to use the data. The problem is it always takes so long to get to data quality; there are all these endless debates about it. So we've been doing a fair amount of work and thinking around this idea of decentralized data. Data by its very nature is decentralized, but the fault domain of traditional big data is that everything is monolithic: the organizations are monolithic, the technology is monolithic, and the roles are hyper-specialized. So you're hearing a lot more these days about this notion of a data fabric, or what some call a data mesh, and we've been leaning into that: the ability to connect various data capabilities, whether it's a data warehouse, a data hub, or a data lake, so that those assets are discoverable, shareable through APIs, and governed on a federated basis, with machine intelligence now being brought in to improve data quality. I wonder, Duncan, if you could talk a little bit about Snowflake's approach to this topic.

>> Sure. So I'd say that making use of all of your data is the key driver behind these ideas of the data mesh and the data fabric. The idea is that you want to bring together not just your strategic data, but also your legacy data and everything you have inside the enterprise. I'd also like to expand on what a lot of people view as 'all of the data.' A lot of people miss that there's this whole other world of data they could have access to: data from their business partners, their customers, their suppliers, and even data that's more in the public domain, whether that's demographic or geographic data or all kinds of other data sources. And what I'd say, to some extent, is that the Data Cloud really facilitates the ability to share and gain access to this, both between organizations and inside organizations.
And you don't have to make lots of copies of the data and worry about the storage, and this federated idea of governance, and all these things that are quite complex to manage. The Snowflake approach really enables you to share data with your ecosystem, all over the world, without any latency, with full control over what's shared, and without having to introduce new complexities or complex interactions with APIs or software integration. The simple approach that we provide allows a relentless focus on creating the right data product to meet the challenges facing your business today.

>> So, Ajay, the key here, to my mind anyway, my key takeaway, is simplicity. If you can take the complexity out of the equation, we're going to get more adoption. It really is that simple.

>> Yeah, absolutely. I think that whole journey, maybe five, six years ago, the adoption of data lakes was a stepping stone. However, the Achilles' heel there was the complexity that it shifted towards consuming data from a data lake, where there were many, many sets of data to curate and consume. Whereas the simplicity of being able to go straight to the data you need to do your role, whether you're in tax compliance or in customer services, is key. And, you know, listen, for Snowflake and Io-Tahoe, one thing we know for sure is that our customers are super smart and very capable. They're data savvy and want to use whichever tool, and embrace whichever cloud platform, is going to reduce the barriers to solving what's complex about that data, simplifying it, and using good old-fashioned SQL to access data and build products from it, to exploit that data. So simplicity is key to allowing people to make use of that data, and CIOs recognize that.

>> So Duncan, the cloud obviously brought in this notion of DevOps, new methodologies, things like agile, and that's brought in the notion of DataOps, which is a very hot topic right now, basically DevOps applied to data. How does Snowflake think about this? How do you facilitate that methodology?

>> Yeah, I agree with you absolutely. DataOps takes these ideas of agile development and agile delivery, and of the DevOps world that we've seen just rise and rise, and it applies them to the data pipeline, which is somewhere it traditionally hasn't happened. And it's the same kind of message as we see in the development world: it's about delivering faster development, having better repeatability, and really getting towards that dream of the data-driven enterprise, where you can answer people's data questions and they can make better business decisions. We have some really great architectural advantages that allow us to do things like cloning data sets without having to copy them, and things like Time Travel, so we can see what data looked like at some point in the past. This lets you set up your own little data playpen as a clone, without really having to copy all of that data, so it's quick and easy. And you can also, with our separation of storage and compute, provision your own virtual warehouse for dev usage, so you're not interfering with anything to do with people's production usage of the data.
These ideas and that scalability just make it easy to make changes, test them, and see what the effect of those changes is. And we've actually seen this: you were talking a lot about partner ecosystems earlier, and the partner ecosystem has taken these ideas that are inside Snowflake and extended them. They've integrated them with DevOps and DataOps tooling, things like version control and Git, and infrastructure automation tools like Terraform, and they've built that out into more of a DataOps product that you can use yourself. So we can see a huge impact of these ideas coming into the data world, we think we're really well placed to take advantage of them, and the partner ecosystem is doing a great job with that. It really allows us to change the operating model for data, so that we don't have as much emphasis on hierarchy and change windows and all these things that are maybe a little old-fashioned. And we're taking the shift from batch data integration into streaming, continuous data pipelines in the cloud. That gets you away from a once-a-week, or once-a-month if you're really unlucky, change window, towards pushing changes in a much more rapid fashion as the needs of the business change.
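A minimal sketch of what that development workflow can look like in Snowflake SQL follows; the warehouse, database, and table names are hypothetical, and how far back Time Travel can reach depends on the retention settings of the account and table.

-- A throwaway dev environment: dedicated warehouse, zero-copy clone, Time Travel.
CREATE WAREHOUSE IF NOT EXISTS dev_wh
    WAREHOUSE_SIZE = 'XSMALL'
    AUTO_SUSPEND   = 60          -- suspend after 60 idle seconds to control cost
    AUTO_RESUME    = TRUE;

-- Zero-copy clone of production: a writable copy created from metadata, no data duplicated up front.
CREATE DATABASE analytics_dev CLONE analytics;

USE WAREHOUSE dev_wh;
USE DATABASE analytics_dev;

-- Try a pipeline change against the clone without touching production.
UPDATE staging.orders SET status = 'REPROCESSED' WHERE status = 'FAILED';

-- Time Travel: see how the table looked an hour ago, within the retention period.
SELECT COUNT(*) AS failed_an_hour_ago
FROM staging.orders AT (OFFSET => -3600)
WHERE status = 'FAILED';

Dropping the cloned database and the dev warehouse when the test is done removes the whole environment without affecting production.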
>> I mean, those hierarchical organizational structures, when we apply them to data, are what actually create the silos. So if you're going to be a silo buster, and Ajay, I look at you guys as silo busters, you've got to put data in the hands of the domain experts, the business people. They know what data they want, and if they have to go beg and borrow for new data sets, et cetera, that's a problem, and that's where automation becomes so key. Frankly, the technology should be an implementation detail, not the dictating factor. I wonder if you could comment on this.

>> Yeah, absolutely. I think making the technologies more accessible to the general business users, or those specialist business teams, is the key to unlocking this. It's interesting to see, as people move from organization to organization, where they've had those experiences of operating in a hierarchical sense and want to break free from that, or have been exposed to automation and continuous workflows. Change is continuous in IT, it's continuous in business, the market is continuously changing. So having that flow of work across the organization, using key components such as GitHub to drive process, Terraform to build code into the process, and automation, with Io-Tahoe leveraging all the metadata from across those fragmented sources, it's good to see how those things are coming together. And watching people move from organization to organization and say, hey, okay, I've got a new start, I've got my first hundred days to impress my new manager, what kind of an impact can I bring to this? Quite often we're seeing that as, let me take away the good learnings of how to do it, or how not to do it, from my previous role, and this is an opportunity for me to bring in automation. I'll give you an example, David. We recently started working with a client in financial services, an asset manager managing financial assets. They've grown over the course of the last 10 years through M&A, and each of those acquisitions has brought with it tactical data: multiple CRM systems, multiple databases, multiple bespoke in-house applications. When the new CIO came in and looked at that, it was, yes, I want to mobilize my data, and yes, I need to modernize my data estate, because my CEO is now looking at the crypto assets on the horizon and the new funds emerging around digital and crypto assets. But in order to get to that, where data absolutely underpins and is the core asset, cleaning up that legacy situation and mobilizing the relevant data into the Snowflake cloud platform is where we're giving time back; that now takes a few weeks. That transition to mobilize the data, to start with a new clean slate to build upon, as a digital crypto asset manager as well as a manager of the legacy, traditional financial assets, bonds, stocks, fixed income, you name it, is where we're starting to see a lot of innovation.

>> Yeah, tons of innovation. I love the crypto examples, and NFTs are exploding, and let's face it, traditional banks are getting disrupted. And so I also love this notion of data RPA, especially because I've done a lot of work in the RPA space. What I would observe is that the early days of RPA, I call it paving the cow path: taking existing processes and applying scripts, letting software robots do their thing. And that was good, because it reduced mundane tasks, but really where it's evolved is a much broader automation agenda; people are discovering new ways to completely transform their processes. And I see a similar analogy for data, the data operating model. So I wonder, when you think about that, how does a customer really get started bringing this to their ecosystem, their data life cycles?

>> Sure. So step one is always the same: figuring out, for the CIO or the chief data officer, what data do I have. And that's increasingly something they want to automate, so we can help them there and do that automated data discovery, whether the data is documents in a file share, a backup archive, a relational data store, or a mainframe, really quickly hydrating that and bringing that intelligence to the forefront of what do I have. Then it's the next step of, okay, now I want to continually monitor and curate that intelligence with the platform I've chosen, let's say Snowflake, so that I can then build applications on top of that platform to serve my internal and external customer needs, with automation around classifying data and reconciliation across different fragmented data silos, building those insights into Snowflake. As you say, a little later on we're talking about data quality: Active DQ, allowing us to reconcile data from different sources as well as look at the integrity of that data, so they can go on to remediation. I want to harness and leverage techniques around traditional RPA, but to get to that stage, I need to fix the data. So remediating and publishing the data in Snowflake, and allowing analysis to be performed in Snowflake, those are the key steps that we see. And just shrinking that timeline into weeks, giving the organization that time back, means they're spending more time on their customer and solving their customer's problems, which is where we want them to be.
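To ground that reconciliation step, here is a small, hypothetical example of the kind of check that compares a legacy extract landed in Snowflake against the curated target table and flags basic integrity issues; the table and column names are invented, and this stands in for what a tool like Active DQ automates rather than reproducing it.

-- Key reconciliation between a landed legacy extract and the curated customer table.
SELECT
    COUNT(s.customer_id)                              AS source_rows,
    COUNT(t.customer_id)                              AS target_rows,
    COUNT(CASE WHEN t.customer_id IS NULL THEN 1 END) AS missing_in_target,
    COUNT(CASE WHEN s.customer_id IS NULL THEN 1 END) AS unexpected_in_target
FROM legacy_landing.crm_customers AS s
FULL OUTER JOIN curated.customers AS t
  ON t.customer_id = s.customer_id;

-- A simple integrity check on the curated side: missing or malformed email addresses.
SELECT COUNT(*) AS suspect_emails
FROM curated.customers
WHERE email IS NULL OR email NOT LIKE '%@%';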
>> This is the brilliance of Snowflake, actually, you know, Duncan. I've talked to the co-founders about this, and it's really that focus on simplicity. So, I mean, you picked a good company to join, in my opinion. I wonder if you could talk about some of the industry sectors that are going to gain the most from data RPA. I mean, traditional RPA, if I can use that term, a lot of it was back office, a lot of it financial. What are the practical applications where data RPA is going to impact businesses, and the outcomes we can expect?

>> Yes. So our drive is really to make the general business user's experience of RPA simpler, using no-code to do that, where they've also chosen Snowflake to build their cloud platform. They've then got the combination of relatively simple scripting techniques, such as SQL, with a no-code approach. And the answer to your question is: whichever sector is looking to mobilize their data. It seems like a cop-out, but to give you some specific examples, David: in banking, where customers are looking to modernize their banking systems and enable better customer experiences through applications and digital apps, that's where we're seeing a lot of traction with this approach of applying RPA to data. Healthcare, where there's a huge amount of work to do to standardize data sets across providers, payers, and patients, and it's an ongoing process there. For retail, helping to build that immersive customer experience, recommending next best actions, providing an experience that is going to drive loyalty and retention; that's dependent on understanding the customer's needs and intent, being able to provide them with the content or the offer at that point in time, and it's all data-dependent. Utilities is another one, with great overlap with Snowflake, helping utilities, telecoms, energy, and water providers build services on that data. And this is where the ecosystem just continues to expand: if we're helping our customers turn their data into services for their ecosystem, that's exciting. And perhaps even more exciting is insurance, which we always used to think of as very dull and mundane; actually, that's where we're seeing huge amounts of innovation, creating new flexible products that are priced to the day, to the situation, with risk models being adaptive when the data changes on events or circumstances. So across all those sectors, they're all mobilizing their data, they're all moving in some way, shape, or form to a multi-cloud setup with their IT, and I think Snowflake and Io-Tahoe being able to accelerate that and make that journey simple, and less complex, is why we've found such a good partner here.

>> All right, thanks for that. And thank you both, we've got to leave it there. Duncan, really appreciate you coming on, and Ajay, best of luck with the fundraising.

>> We'll keep you posted. Thanks, David.

>> All right, great. Okay, now let's take a look at a short video that's going to help you understand how to reduce the steps around your DataOps. Let's watch.

Published Date : Apr 29 2021
