Is Supercloud an Architecture or a Platform | Supercloud2

(electronic music) >> Hi everybody, welcome back to Supercloud 2. I'm Dave Vellante with my co-host John Furrier. We're here at our tricked out Palo Alto studio. We're going live wall to wall all day. We're inserting a number of pre-recorded interviews, folks like Walmart. We just heard from Nir Zuk of Palo Alto Networks, and I'm really pleased to welcome in David Flynn. David Flynn you may know as one of the people behind Fusion-io, which completely changed the way people think about storing and accessing data. David Flynn is now the founder and CEO of a company called Hammerspace. David, good to see you, thanks for coming on. >> David: Good to see you too. >> And Dr. Nelu Mihai is the CEO and founder of Cloud of Clouds. He's actually built a Supercloud. We're going to get into that. Nelu, thanks for coming on. >> Thank you, Happy New Year. >> Yeah, Happy New Year. So I'm going to start right off with a little debate that's going on in the community, if you guys would bring out this slide. So Bob Muglia earlier today gave a definition of Supercloud. He felt like we had to tighten ours up a little bit. He said a Supercloud is a platform, underscoring platform, that provides programmatically consistent services hosted on heterogeneous cloud providers. Now, Nelu, we have this shared doc, and you've been in there. You responded, you said, well, hold on. Supercloud really needs to be an architecture, or else we're going to have a stovepipe of stovepipes, really. And then you went on with more detail: what's the information model? What's the execution model? How are users going to interact with Supercloud? So I'll start with you. Why architecture? The inference is that with a platform, the platform provider is responsible for the architecture. Why does that not work in your view? >> It's a very interesting question. So whenever I think about platform, what's the connotation? You think about a monolithic system. I don't know whether it's true or not, but there is this connotation of monolithic. On the other hand, if you look at what's the problem right now with hyperclouds, from the customer perspective, they're very complex. There is a heterogeneous world where actually every single one of these hyperclouds has its own architecture. You need rocket scientists to build cloud applications. There is always this contradiction between cost and performance; they fight each other. And I'm quoting here a former friend of mine from Bell Labs who worked at AWS, who used to say "Cloud is cheap as long as you don't use it too much." (group chuckles) So clearly we need something that, from the principle point of view, plays the role of an operating system that sits on top of these heterogeneous hyperclouds. And there's nothing wrong with having these proprietary hyperclouds; think about processors, think about operating systems, and so on and so forth. But in order to build a system that is simple enough, I think we need to go deeper and understand. >> So the counterargument to that, David, is you'll never get there. You need a proprietary system to get to market sooner, to solve today's problem. Now I don't know where you stand on this platform versus architecture debate; I haven't asked you. >> I think there are aspects of both for sure. I mean it needs to be an architecture in the sense that it's broad-based and open and so forth.
But you know, platform, you could say, as long as people can instantiate it themselves on their own infrastructure, as long as it's something that can be deployed as, you know, software-defined, you don't want the concept of platform being the monolith, you know, combined hardware and software. So it really depends on what you're focused on when you're saying platform. I'd say as long as it's a software-defined thing that can literally run anywhere. I mean, because I really think what we're talking about here is the original concept of cloud computing: the ability to run anything anywhere, without having to care about the physical infrastructure. And what we have today is not that. The cloud today is a big mainframe in the sky that just happens to be large enough that once you select which region, generally you have enough resources. But, you know, nowadays you don't even necessarily have enough resources in one region, and then you're kind of stuck. So we haven't really gotten to that utility model of computing. And you're also asked to rewrite your application, you know, to abandon the conveniences of high-performance file access. You've got to rewrite it to use object storage stuff. We have to get away from that. >> Okay, I want to just drill on that, 'cause I like that point about there not being enough availability. But on the developer cloud, the original AWS premise was targeting developers, 'cause at that time, you had to provision a Sun box, get a Cisco DSU/CSU; now you get on the cloud. But I think you're skipping the scale question, 'cause I think right now, scale is huge, enterprise grade versus cloud for developers. >> That's right. >> Because I mean look at Amazon, Azure: they've got compute, they've got storage, they've got queuing, and some stuff. If you're doing a startup, you throw your app up there, localhost to cloud, no big deal. It's the scale thing that gets me- >> And you can tell by the fact that, in regions that are under high demand, right, like in London or LA, at least with the clients we work with in the media and entertainment space, it costs twice as much for the exact same cloud instances that do the exact same amount of work as somewhere out in rural Canada. So why do you have such a cost differential? It has to do with that supply and demand, and the fact that the clouds don't really give you the ability to run anything anywhere. Even within the same cloud vendor, you're stuck in a specific region. >> And that was never the original promise, right? I mean it was, we turned it into that. But the original promise was get rid of the heavy lifting of IT. >> Not have to run your own, yeah, exactly. >> And then it became, wow, okay, I can run anywhere. And then you know, it's like web 2.0. You know people say why Supercloud, you and I talked about this, why do you need a name for Supercloud? It's like web 2.0. >> It's what cloud was supposed to be. >> It's what cloud was supposed to be, (group laughing and talking) exactly, right. >> Cloud was supposed to be run anything anywhere, or at least that's what we took it as. But you're right, originally it was just, oh, you don't have to run your own infrastructure, and you can choose somebody else's infrastructure. >> And you did that. >> But you're still bound to that. >> Dave: And people said I want more, right? >> But how do we go from here? >> That's actually a very good point, because indeed, when the first hyperclouds were designed, they were designed really focused on customers.
I think Supercloud is an opportunity to design it in the right way, also keeping in mind computer science rigor. And we should take advantage of that, because in fact, if cloud had been designed properly from the beginning, we probably wouldn't have needed Supercloud. >> David: You wouldn't have been asked to rewrite your application. >> That's correct. (group laughs) >> To use REST interfaces to your storage. >> Revisionist history is always a good one. But look, cloud is great. I mean your point is cloud is a good thing. Don't hold it back. >> It is a very good thing. >> Let it continue. >> Let it go as it is. >> Yeah, let that thing continue to grow. Don't impose restrictions on the cloud. Just refactor what you need to for scale or enterprise grade or availability. >> And would you agree with that? Is that true, or is it the problem you're solving? >> Well yeah, what the cloud is doing is absolutely necessary. What the public cloud vendors are doing is absolutely necessary. But what's been missing is how to provide a consistent interface, especially to persistent data, and have it be available across different regions and across different clouds. 'Cause data is a highly localized thing in current architectures. It only exists as rendered by the storage system that you put it in, whether that's a legacy thing like a NetApp or an Isilon or even a cloud data service. It's localized to a specific region of the cloud in which you put it. We have to delocalize data and provide a consistent interface to it across all sites. That's high-performance local access, but to global data. >> And so Walmart earlier today described their, what we call Supercloud; they call it the Walmart cloud native platform. And they use this triplet model. They have AWS and Azure, no, oh sorry, no AWS. They have Azure and GCP and then on-prem, where all the VMs live. When you, you know, probe, it turns out that it's only stateless in the cloud. (John laughs) So, the state stuff- >> Well let's just admit it, there is no such thing as stateless, because even the application binaries and libraries are state. >> Well I'm happy that I'm hearing that. >> Yeah, okay. >> Because actually I have a lot of debate (indistinct). If you think about it, no software running on a (indistinct) machine is stateless. >> David: Exactly. >> This is something that was- >> David: And that's data that needs to be distributed and provided consistently >> (indistinct) >> across all the clouds. >> And actually, it's nonsense, but- >> Dave: So it's an illusion, okay. (group talks over each other) >> (indistinct) you guys talk about stateless. >> Well, see, people make the confusion between state and persistent state, okay. Persistent state is a different thing; state is a different thing. So, but anyway, I want to go back to your point, because there's a lot of debate here. People are talking about data, some people are talking about logic, some people are talking about networking. In my opinion, it's this triplet of data, logic, and connectivity that has equal importance. And actually, depending on the application, the center of gravity can move towards data, or towards what I call execution units or workloads. And connectivity is actually the most important part of it. >> David: (indistinct). >> Some people are saying move the logic towards the data; some other people, and you, are saying actually, no, you have to build a distributed data mesh.
What I'm saying is actually that you have to consider all these three variables, all these vectors, in order to decide, based on the application, what's the most important. Because sometimes- >> John: So the application chooses. >> That's correct. >> Well, it's what operating systems were in the past: principally the thing that runs and manages the jobs, the job scheduler, and the thing that provides your persistent data (indistinct). >> Okay. So we finally got operating system into the equation, thank you. (group laughs) >> Nelu: I actually have a PhD in operating systems. >> 'Cause what we're talking about is an operating system. So forget platform or architecture, it's an operating environment. Let's use it as a general term. >> All right. I think that's about it for me. >> All right, let's take (indistinct). Nelu, I want to ask you quickly, 'cause I believe it's an operating system. I think it's going to be a reset, refactored. You wrote to me, "The model of Supercloud has to be open, theoretical, has to satisfy the rigors of computer science, and customer requirements." So unique to today, if the OS is going to be refactored, it may or may not be Red Hat or somebody else. This new OS, obviously the requirements come from customers too, but what's the computer science that is needed? Where are we? What's missing? Where's the science in this shift? It's not your standard OS, it's not like an- (group talks over each other) >> I would beg to differ. >> (indistinct) truly an operating environment. But if you think about it and make analogies, what you need when you design a distributed system: well, you need an information model, yeah. You need to figure out how the data is located and distributed. You need a model for the execution units, and you need a way to describe the interactions between all these objects. And it is my opinion that we need to go deeper and formalize these operations in order to make a step forward, and when we design Supercloud, design something that is better than the current hyperclouds. And actually, when we design something better, you make the system more efficient, and it's going to be better from the cost point of view and from the performance point of view. But we need to add some math into all this customer-focused centering. I really admire AWS and their executive team focusing on the customer. But now it's time to go back and see: if we apply some computer science, if we try to formalize and build a theoretical model of cloud, can we build a system that is better than the existing ones? >> So David, how do you- >> This is what I'm saying. >> That's a good question. >> How do you see the operating system of a, or operating environment of a decentralized cloud? >> Well I think it's layered. I mean we have operating systems that can run systems quite efficiently. Linux has sort of won in the data center, but we're talking about a layer on top of that. And I think we're seeing the emergence of that. For example, on the job scheduling side of things, Kubernetes makes a really good example. You know, you break the workload into the most granular units of compute, the containerized microservice, and then you use a declarative model to state what is needed and give the system the degrees of freedom to choose how to instantiate it. Because the thing about these distributed systems is that the complexity explodes, right?
Running a piece of hardware, running a single server, is not a problem, even with all the many cores and everything like that. It's when you start adding in the networking and making it so that you have many of them, and then when it's going across whole different data centers. You know, so, at that level, the way you solve this is not manually (group laughs) and not procedurally. You have to change the language so it's intent-based, it's a declarative model, and what you're stating is what is intended, and you're leaving it to more advanced techniques, like machine learning, to decide how to instantiate that service across the cluster, which is what Kubernetes does, or how to instantiate the data across the diverse storage infrastructure. And that's what we do. >> So that's a very good point, because actually what has been neglected with hyperclouds is really optimization and automation. But in order to be able to do both of these things, you need, I'm going back and I'm stubborn, you need to have a mathematical model, a theoretical model, because what does automation mean? It means that we have to put machines to do the work instead of us, and machines work with what? Formulas, with algorithms; they don't work with services. So I think Supercloud is an opportunity to underscore the importance of optimization and automation- >> Totally agree. >> in hyperclouds. And actually, by doing that, we can also have an interesting connotation: we are also contributing to saving our planet, because if you think about it, right now we're consuming a lot of energy on these hyperclouds and also on all these AI applications, and I think we can do better and build the same kind of applications using less energy. >> So yeah, great point, love that call out. You know, Dave and I always joke about the old, 'cause we're old, we talk about, you know, (Nelu laughs) old history, OS/2 versus DOS, okay. OS/2 was clearly better, the first threaded OS, but DOS never went away. So how does legacy play into this conversation? Because I buy the theoretical, I love the conversation. Okay, I think it's an OS, totally see it that way myself. What's the blocker? Is there a legacy that drags it back? Is the anchor dragging from legacy? Is there a DOS versus OS/2 moment? Is there an opportunity to flip the script? This is- >> I think that's a perfect example of why we need to support the existing interfaces. Real operating systems like Linux understand how to present data: it's called a file system, block devices, things that plumb in there. And by, you know, going to a REST interface and S3 and telling people they have to rewrite their applications, you can't even consume your application binaries that way; the OS doesn't know how to pull in that sort of thing. So to get to cloud, to get to the ability to host massive numbers of tenants within a centralized infrastructure, you know, we abandoned these lower-level interfaces to the OS, and we have to go back to that. The reason why DOS ultimately won is that it had the momentum of the install base. We're seeing the same thing here. Whatever it is, it has to be a real file system and not a cut-down file system. >> Nelu, what's your reaction, 'cause you're in the theoretical bandwagon. Let's get your reaction. >> No, I think you made a good analogy between OS/2 and DOS, but I'll go even farther, saying, if you think about it, the evolution of the operating system didn't stop the evolution of the underlying microprocessors, hardware, and so on and so forth.
On the contrary, it was a catalyst for that, because everybody could develop their own hardware without worrying that the applications on top of the operating system were going to have to be modified. The same thing is going to happen with Supercloud. You're going to have the AWSs, the Azures, and the GCPs continue to evolve in their own proprietary way. But if we create on top of it the right interface- >> The open, this is why open is important. >> That's correct, because you're going to see, some time ago everybody was saying, remember, venture capitalists were saying, "AWS killed the world, nobody's going to come." Now you see what Oracle is doing, and then you're going to see other players. >> It's funny, Amazon's trying to be more like Microsoft. Microsoft's trying to be more like Amazon and Google- Oracle's just trying to say they have cloud. >> That's correct, (group laughs) so, my point is, you're going to see a multiplication of these hyperclouds and cloud technologies. So, the system has to be open in order to accommodate what exists and what is going to come. Okay, so it's open. >> So legacy is an opportunity, not a blocker, in your mind. And you see- >> That's correct. I think we should allow them to continue to be their own, actually. But maybe you're going to find a way to connect with it. >> Amazon's the processor, and they're on the 8080, right? >> That's correct. >> You're saying you'd love people to put it to work. >> That's a good analogy. >> But at performance levels, you say good luck, right? >> Well yeah, we have to be able to take traditional applications, high-performance applications, those that consume file systems and persistent data. Those things have to be able to run anywhere. You need to be able to put them onto, you know, more elastic infrastructure. So we have to actually get cloud to where it lives up to its billing. >> And that's what you're solving for, with Hammerspace. >> That's what we're solving for, making it possible- >> Give me the bumper sticker. >> Solving for how you have massive quantities of unstructured file data. At the end of the day, all data ultimately is unstructured data. Have that persistent data available across any data center, within any cloud, within any region, on-prem, at the edge. And have not just the same APIs, but the exact same data sets, and not sucked over a straw remotely, but with extremely high-performance local access. So how do you have local access to globally shared, distributed data? That's what we're doing. We are orchestrating data globally across all different forms of storage infrastructure, so you have consistent access at the highest performance levels, at the lowest level, innately built into the OS, how to consume it as (indistinct). >> So are you going into all the clouds and natively building in there, or are you off cloud? >> So this is software that can run on cloud instances and provide high-performance file within the cloud. It can take file data that's on-prem. Again, it's software; it can run on virtual or physical servers. And it abstracts the data from the existing storage infrastructure and makes the data visible, consumable, and orchestratable across any of it. >> And what's the elevator pitch for Cloud of Clouds? Give that too. >> Well, Cloud of Clouds creates a theoretical model of cloud, and it describes every single object in the cloud,
whether it's data, execution units, or connectivity, with one single class of very simple objects. And I can give you (indistinct). >> And the problem that solves is what? >> The problem it solves is that it creates this mathematical model that is necessary in order to do other interesting things, such as optimization, using SAT engines, using automation, applying ML, for instance, or deep learning, to automate all these clouds. If you think about the industrial field, we know how to manage and automate huge plants. Why wouldn't we do the same thing in cloud? It's the same thing, you- >> That's what you mean by theoretical model. >> Nelu: That's correct. >> Lay out the architecture, almost the bones of a skeleton or something, and then- >> That's correct, and then on top of it you can actually build a platform; you can create your services. >> When you say math, you mean you put numbers to it, you kind of index it. >> You quantify this thing and you apply mathematics. It's really about, I can disclose this thing: it's really about describing the cloud as a knowledge graph, where every single object in the graph, every node and edge, is a vector. And then once you have this model, you can apply field theory and linear algebra to do operations with these vectors. And this creates a very interesting opportunity to let the math do this thing for us. >> Okay, so what happens with hyperscalers like AWS in your model? >> So in my model, actually- >> Are they happy with this, or they- >> I'm very happy with that. >> Will they be happy with you? >> We create an interface to every single hypercloud. We actually don't need to interface with the thousands of APIs, but you know, if we have the 80/20 rule and we map these APIs into this graph, then every single operation that is done in this graph is done, from the beginning, in an optimized manner and is also automation-ready. >> That's going to be great. David, I want to go back to you before we close real quick. You've had a lot of experience, multiple ventures on the front end. You've talked to a lot of customers who've been innovating. Where are the classic (indistinct)? 'Cause you used to sell and invent products around the old-school enterprises with storage; you know that trajectory. Storage is still critical to store the data. Where's the classic enterprise-grade mindset right now? Those customers that were buying, that are buying storage, they're in the cloud, they're lifting and shifting. They've not yet put the throttle down on DevOps. When they look at this Supercloud thing, are they like a deer in the headlights, or are they getting it? What does the classic enterprise look like? >> You're seeing people at different stages of adoption. Some folks are trying to get to the cloud; some folks are trying to repatriate from the cloud, because they've realized it's better to own than to rent when you use a lot of it. And so people are at very different stages of the journey. But the one thing that's constant is that there's always change. And the change here has to do with being able to change the location where you're doing your computing.
So being able to support traditional workloads in the cloud, being able to run things at the edge, and being able to rationalize where the data ought to exist, and with a declarative model, intent-based, business-objective-based, be able to swipe a mouse and have the data get redistributed and positioned across different vendors, across different clouds. We're seeing that as really top of mind right now, because everybody's at some point on this journey, trying to go somewhere, and it involves taking their data with them. (John laughs) >> Guys, great conversation. Thanks so much for coming on. For John, Dave. Stay tuned, we've got a great analyst power panel coming right up. More from Palo Alto, Supercloud 2. Be right back. (bouncy music)
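Nelu's knowledge-graph idea above, every cloud object embedded as a vector so that placement and optimization questions become linear algebra, is concrete enough to sketch. The following is purely illustrative: Cloud of Clouds has not published its model, and the object names, the four-dimensional embedding, and the cosine-similarity scoring are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a "cloud as a knowledge graph" model: every
# object (workload, region, link) is embedded as a vector, and a
# placement decision becomes a linear-algebra scoring problem.
# The schema, values, and scoring choice are invented for illustration.
import numpy as np

# Assumed embedding dimensions: [cost, latency, capacity, energy].
nodes = {
    "workload_etl":   np.array([0.5, 0.4, 0.6, 0.6]),
    "region_us_east": np.array([0.7, 0.8, 0.9, 0.5]),
    "region_eu_west": np.array([0.4, 0.6, 0.8, 0.2]),
}

def placement_score(workload: np.ndarray, region: np.ndarray) -> float:
    """Score a workload/region pairing by cosine similarity (one arbitrary choice)."""
    return float(workload @ region /
                 (np.linalg.norm(workload) * np.linalg.norm(region)))

# Rank candidate regions for the ETL workload, best first.
candidates = ["region_us_east", "region_eu_west"]
ranked = sorted(candidates,
                key=lambda r: placement_score(nodes["workload_etl"], nodes[r]),
                reverse=True)
print(ranked)
```

The point of such a formalization is the one Nelu makes in the conversation: once every object lives in a common vector space, optimization and automation can be handed to solvers and ML models rather than hand-written, per-cloud procedural logic.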

Published Date : Jan 18 2023


Tim Yocum, Influx Data | Evolving InfluxDB into the Smart Data Platform

(soft electronic music) >> Okay, we're back with Tim Yocum, who is the Director of Engineering at InfluxData. Tim, welcome, good to see you. >> Good to see you, thanks for having me. >> You're really welcome. Listen, we've been covering opensource software on theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem: the cloud is being built out on opensource, mobile, social platforms, key databases, and of course, InfluxDB. And InfluxData has been a big consumer and contributor of opensource software. So my question to you is, where have you seen the biggest bang for the buck from opensource software? >> So yeah, you know, Influx really, we thrive at the intersection of commercial services and opensource software, so OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants. And like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >> But I've got to ask you, Tim, because one of the challenges that we've seen, in particular, you saw this in the heyday of Hadoop: the innovations come so fast and furious, and as a software company, you've got to place bets, you've got to commit people, and sometimes those bets can be risky and not pay off. So how have you managed this challenge? >> Oh, it moves fast, yeah. That's a benefit, though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is we fail fast and fail often; we try a lot of things. You know, you look at Kubernetes, for example. That ecosystem is driven by thousands of intelligent developers, engineers, builders. They're adding value every day, so we have to really keep up with that. And as the stack changes, we try different technologies, we try different methods. And at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >> So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that's been off the charts and seen the most significant adoption and velocity, particularly along with cloud. But really, Kubernetes is just, you know, still up and to the right consistently, even with the macro headwinds and all of the other stuff that we're sick of talking about. So what do you do with Kubernetes in the platform? >> Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code. So our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences.
Just a followup on that: I presume, it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, wherever. Is that correct? >> Yeah, so we've basically built, more or less, platform engineering, as the new hot phrase goes. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >> And I know I'm taking a little bit of a tangent, but is that, I'll call it a PaaS layer, if I can use that term, are there specific attributes tied to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value-add, or is it pretty much generic? >> So we really look at things through a build-versus-buy lens. Some things we want to leverage from cloud provider services, for instance Postgres databases for metadata, perhaps. Get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people, really, and we can stamp out clusters across multiple regions in no time. >> So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers? >> Yeah, so what we're doing is, like everybody else would do, we're looking for trade-offs that make sense. We really want to protect our customers' data, so we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team and, of course, for our customers; you don't even see that. But we don't want to try to reinvent the wheel, like I had mentioned with SQL datasources for metadata, perhaps. Let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering, and we can help our developers then focus on the InfluxData software, the Influx Cloud software. >> So take it to the customer level. What does it mean for them? What's the value that they're going to get out of all these innovations that we've been talking about today, and what can they expect in the future? >> So first of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over four billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the opensource software and how we've developed the platform, you're getting a highly available, high-cardinality time-series platform. We manage it, and really, as I had mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in realtime. We deploy to our platform every day, repeatedly, all the time.
And it's that continuous deployment that allows us to continue testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. All of that happens behind the scenes, and like we had mentioned earlier, Kubernetes, I mean, that allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end, we want you to focus on getting actual insights from your data instead of running infrastructure; you know, let us do that for you. >> That makes sense. Are the innovations that we're talking about in the evolution of InfluxDB, do you see that as sort of a natural evolution for existing customers? Is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >> Yeah, it really is. It's a little bit of both. Any engineer will say, "Well, it depends." So cloud-native technologies are really the hot thing, IoT, industrial IoT especially. People want to just shove tons of data out there and be able to do queries immediately, and they don't want to manage infrastructure. What we've started to see are people that use the cloud service as their datastore backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines, down-sample that data, and send the rest of that data off to Influx Cloud, where the heavy processing takes place. So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data, and have us take care of that. And, of course, as we change the platform, end users benefit from that immediately. >> And so obviously you've taken away a lot of the heavy lifting for the infrastructure. Would you say the same things about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >> We take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's, of course, always a concern; you see in the news all the time companies being compromised. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that, you know, as we use new tools. That's just part of our jobs, to make sure that the platform we're running has fully vetted software. And you know, with opensource especially, that's a lot of work, and so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >> And that's key, especially when you start getting into, you know, what we talk about with IoT and the operations technologies, the engineers running that infrastructure.
You know, historically, as you know, Tim, they would air gap everything; that's how they kept it safe. But that's not feasible anymore. Everything's- >> Can't do that. >> connected now, right? And so you've got to have a partner that, again, takes away that heavy lifting and R&D so you can focus on some of the other activities. All right, give us the last word and the key takeaways from your perspective. >> Well, you know, from my perspective, I see it as a two-lane approach: with Influx, with any time-series data, you've got a lot of stuff that you're going to run on-prem. What you had mentioned, air gapping? Sure, there's plenty of need for that. But at the end of the day, people that don't want to run big datacenters, people that want to entrust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >> Tim, really appreciate you coming on the program. Great stuff, good to see you. >> Thanks very much, appreciate it. >> Okay, in a moment, I'll be back to wrap up today's session. You're watching theCUBE. (soft electronic music)
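The edge-to-cloud pattern Tim describes, ingesting on a local OSS instance and forwarding only down-sampled rollups to Influx Cloud, can be sketched with the official influxdb-client Python package. This is a hedged illustration: the URLs, tokens, orgs, and bucket names are placeholders, and in practice this logic is often expressed as a scheduled Flux task on the edge instance rather than an external script.

```python
# Hypothetical edge-to-cloud down-sampling sketch using the official
# influxdb-client Python package; all endpoints, tokens, orgs, and
# bucket names below are placeholders.
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

edge = InfluxDBClient(url="http://localhost:8086",
                      token="EDGE_TOKEN", org="factory")
cloud = InfluxDBClient(url="https://cloud.example.influxdata.com",
                       token="CLOUD_TOKEN", org="acme")

# Down-sample the last hour of raw sensor readings to one-minute means.
flux = '''
from(bucket: "production_line")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "sensor")
  |> aggregateWindow(every: 1m, fn: mean)
'''
rollup = edge.query_api().query_data_frame(flux)  # assumes a single result table

# Keep just the timestamped means; the raw data never leaves the edge.
rollup = rollup.set_index("_time")[["_value"]].rename(columns={"_value": "mean"})

cloud.write_api(write_options=SYNCHRONOUS).write(
    bucket="rollups",
    record=rollup,
    data_frame_measurement_name="sensor_1m",
)
```

The design point is the one Tim makes: the heavy raw data stays at the plant, and only the compact rollup crosses the wire to the cloud for long-term storage and heavier processing.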

Published Date : Nov 8 2022


Anais Dotis Georgiou, InfluxData | Evolving InfluxDB into the Smart Data Platform

>> Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis Georgiou is here. She's a developer advocate for InfluxData, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into realtime analytics. Anais, welcome to the program. Thanks for coming on. >> Hi, thank you so much. It's a pleasure to be here. >> Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory processing, of course, for speed. It's a columnar store, so it gives you compression efficiency; it's going to give you faster query speeds; and it's going to use Parquet files in object storage. So you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >> Sure, that's a great question. So among the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also want to have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, which is super useful. Also, broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >> Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. Adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++, for example? >> Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++.
So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow, and this control over memory. Also, Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, to protect against buffer overflows, and to ensure thread-safe caching structures as well. So essentially, it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really high-cardinality use cases. >> Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, from the old days even to today, you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain: what is Arrow, and what does it bring to InfluxDB? >> Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have, like, two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high-cardinality use cases. It also enables faster scan rates. So if you want to find, say, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand different points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage.
If you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >> Okay. So you've basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really, you know, kind of native. Is the former not as effective because it's largely a bolt-on? Can you elucidate on that front? >> Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >> Yeah. Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >> Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a pandas API, so that you can take advantage of pandas data frames as well, and all of the machine learning tools associated with pandas. >> Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >> Sure. So Parquet is the column-oriented durable file format. It's important because it'll enable bulk import and bulk export. It has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so those are essentially a lot of the benefits of Parquet. >> Got it. Very popular. So what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >> Sure. So InfluxData has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. It's just that kind of symbiotic relationship and appreciation of the open source community.
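Anais' room-and-stove example is easy to make concrete with the pyarrow library, which exposes Arrow's column-oriented tables and the Parquet format directly from Python. A minimal sketch follows; the values are invented for illustration:

```python
# Columnar layout in practice with Apache Arrow and Parquet (pyarrow).
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

# The regulated room temperature barely changes, so its column is full of
# repeated values, exactly the pattern that compresses cheaply.
table = pa.table({
    "time":        pa.array(range(6)),
    "location":    ["room", "room", "room", "stove", "stove", "stove"],
    "temperature": [21.0, 21.0, 21.0, 180.0, 182.0, 181.0],
})

# A min/max scan touches only the one column it needs, not every row.
print(pc.min_max(table["temperature"]))  # struct with min=21.0, max=182.0

# Parquet persists the same columnar layout durably; repeated values in a
# column encode down to very little disk space.
pq.write_table(table, "temps.parquet")
print(pq.read_table("temps.parquet").schema)
```

The repeated room readings sit next to each other in the temperature column, which is what makes the compression cheap, and the min/max scan never touches the time or location columns, which is the scan-rate advantage she describes.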
>> Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >> So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. And if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard work, and you have questions and just want to learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel. Look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions. So if there's a particular technology or stack that you want to dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >> Yeah, that's awesome. You guys have a really rich community; collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >> Thank you. I really appreciate it. >> All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yocum. He's the director of engineering for InfluxData, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this.
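DataFusion itself is a Rust library, but it also ships Python bindings (the datafusion package on PyPI) that make its query-engine role easy to see: register a Parquet file as a table and run SQL over the Arrow data. This is a hedged sketch; the class and method names match recent releases of the bindings and may differ in older versions.

```python
# Querying the Parquet file from the previous example through Apache Arrow
# DataFusion's Python bindings (pip install datafusion). API names assume
# a recent release of the bindings.
from datafusion import SessionContext

ctx = SessionContext()
ctx.register_parquet("temps", "temps.parquet")

# DataFusion plans and executes the SQL over Arrow record batches.
df = ctx.sql("""
    SELECT location, MIN(temperature) AS low, MAX(temperature) AS high
    FROM temps
    GROUP BY location
""")
df.show()
```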

Published Date : Nov 8 2022



Evolving InfluxDB into the Smart Data Platform


 

>>This past May, The Cube, in collaboration with Influx Data, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken, you know, every hour, every minute, every second, down to the millisecond, and how the world was moving toward real-time or near real-time data analysis to support physical infrastructure like sensors and other devices and IoT equipment. Time series databases have had to evolve to efficiently support real-time data in emerging IoT and other use cases.

>>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by The Cube. My name is Dave Vellante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands, particularly around data analytics use cases in real time. First, we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data, and we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and by specific tools. In this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which together power a new engine for InfluxDB.

>>Now, these innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real-time analytics with an architecture that can process data much faster and much more efficiently. After Brian, we're gonna hear from Anais Dotis Georgiou, who is a developer advocate at Influx Data. We're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, the director of engineering at Influx Data, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight, and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on.

>>Thanks Dave. Great to be here. I appreciate the time.

>>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there?

>>No, no, not at all. I mean, I think for us it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us with now, related to requests like SQL query support, things like that, we have to figure out a way to execute those for them in a way that will scale long term. And then we also wanna make sure we're innovating, staying ahead of the market, and anticipating those future needs. So this is really a transparent change for our customers. I think we'll be adding new capabilities over time that leverage this new engine, but initially the customers who are using us are gonna see just great improvements in performance, especially those that are working at the top end of the workload scale, you know, the massive data volumes and things like that.

>>Yeah, and we're gonna get into that today, and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about?

>>Well, I mean, like three years ago we were primarily on premises, right? We had our open source, we had an enterprise product, and sort of shifting that technology, especially the open source code base, to a service basis, where we were hosting it through multiple cloud providers, that was a long journey. I guess, you know, phase one was we wanted to host enterprise for our customers, so we created a service where we just managed and ran our enterprise product for them. Phase two of this cloud effort was to optimize for multi-tenant, multi-cloud, to be able to host it in a truly SaaS manner where we could use some type of customer activity or consumption as the pricing vector. And that was sort of the birth of the real first InfluxDB Cloud, which has been really successful.

>>We've seen, I think, like 60,000 people sign up, and we've got tons and tons of both enterprises as well as new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis. And having that big pool of very diverse customers to chat with as they're using the product, as they're giving us feedback, et cetera, has pointed us in a really good direction in terms of making sure we're continuously improving, and then also making these big leaps as we're doing with this new engine.

>>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is, and what does it take to make that shift from, you know, time series specialist to real-time analytics and being able to support both?

>>Yeah, I mean, it's much more of an evolution, I think, than a shift or a pivot. Time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the raw APIs of our platform themselves. The time series market is one that we've worked diligently to lead.
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. One of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub, can inspect them, and even can try to implement or execute some of it themselves in their own infrastructure. We are committed to bringing our latest and greatest to our cloud customers first, for a couple of reasons. Number one, there are big workloads there, and those customers have high expectations of us. Number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, how the system itself is performing.

>>And so just being careful, maybe a little cautious, in terms of how big we go with this right away, limits the risk of any issues that can come with new software rollouts. We haven't seen anything so far, but it also gives us the opportunity to have meaningful conversations with a small group of users who are using the products. Once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem.

>>Yeah, that makes a lot of sense. And you can do some experimentation, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there?

>>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a newer, very popular systems language. It's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is something we can inspect very closely, but then also rely on the fact that it's going to behave well, and handle it if it does find error conditions. I mean, we've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, and that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend, our TSI and our time-structured merge trees, this is a big break from that: Arrow on the in-memory side, and then Parquet on the on-disk side.

>>It allows us to present a unified set of APIs, for those really fast real-time inquiries that we talked about, as well as for very large historical bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in terms of the machine learning community. And getting that all to work, we had to glue it together with Arrow Flight. That's what we're using as our RPC component. It handles the orchestration and the transportation of the columnar data.
Now we're moving to a true columnar database model for this version of the engine, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization. And, to that point again, blurring the line between real-time and historical data: it's highly optimized for both streaming, micro-batch and then batches, but true streaming as well.

>>Yeah. Again, it's funny you mention Rust. It's been around for a long time, but its popularity is really starting to hit that steep part of the S-curve. And we're gonna dig into more of that, but is there anything else that we should know about, Brian? Give us the last word.

>>Well, I mean, I think first I'd like everybody watching to take a look at what we're offering in terms of early access and beta programs. If you wanna participate, or if you wanna work in terms of early access with the new engine, please reach out to the team. I'm sure there's a lot of communications going out, and it'll be highly featured on our website, but reach out to the team. Believe it or not, we have a lot more going on than just the new engine, and so there are also other programs, things we're offering to customers in terms of the user interface, data collection, and things like that. And if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags.

>>But if there's something new that you wanna try out, we'd just love to hear from you. And then our goal would be that, as we give you access to all of these new cool features, you would give us continuous feedback on these products and services, not only what you need today, but what you'll need tomorrow to build the next versions of your business. Because the whole database, the ecosystem, as it expands out into this vertically oriented stack of cloud services and enterprise databases and edge databases, it's gonna be what we all make it together, not just those of us who are employed by InfluxDB. And then finally, I would just say: please watch Anais's and Tim's sessions. These are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are, most of all, customer-obsessed, which is amazing. And there's no better take, honestly, on the technical details of this, especially when it comes to the value that these investments will bring to our customers and our communities. So I encourage you to pay more attention to them than you did to me, for sure.

>>Brian Gilmore, great stuff. Really appreciate your time. Thank you.

>>Yeah, thanks Dave. It was awesome. Look forward to it.

>>Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real time, a really hot area. As Brian said, in a moment I'll be right back with Anais Dotis Georgiou to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this.

>>Time series data is everywhere.
The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning, because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users, instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data. Access controls ensure that only the people who should see your data can see it.

>>And encryption protects your data at rest and in transit, between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud.

>>Okay, we're back. I'm Dave Vellante with The Cube, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis Georgiou is here. She's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on.

>>Hi, thank you so much. It's a pleasure to be here.

>>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, and you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand?

>>Sure, that's a great question. So, some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality, and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, which is super useful.
Also, broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem, and have compatibility with things like SQL, Python, and maybe even Pandas in the future.

>>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust, as an alternative to, say, C++ for example?

>>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++, has similar performance, and also compiles to native code like C++, unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow, along with this control over memory. And Rust's packaging system, crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially, it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high-cardinality use cases.

>>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, even today you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow, and what does it bring to InfluxDB?

>>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room, and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have two rows with the two temperature values, for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often.

>>So when you have column-oriented storage, essentially you take each column and group it together. And if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will neighbor each other in the storage format. This provides a really perfect opportunity for cheap compression, and that cheap compression enables high-cardinality use cases. It also enables faster scan rates. So if you wanna find the min and max value of the temperature in the room across a thousand different points, you only have to scan those thousand values in that one column in order to answer the question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage.

>>If you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from.

>>Okay. So you basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format too, versus what you're talking about, which is really kind of native. Is the format not as effective when it's largely a bolt-on? Can you elucidate on that front?

>>Yeah, it's not as effective, because you have more expensive compression, and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage.

>>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here?

>>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API, so that you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas.

>>Okay. You're also leveraging Parquet in the platform, 'cause we heard a lot about Parquet in the middle of the last decade, as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important?

>>Sure. So Parquet is the column-oriented durable file format.
It's important because it enables bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefit of Parquet.

>>Got it. Very popular. So what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community?

>>Sure. So Influx Data has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion, for things like memory optimization and support for additional SQL features, like timestamp arithmetic, EXISTS clauses, and memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement, and the more we invest in our own project as well. So it's that kind of symbiotic relationship and appreciation of the open source community.

>>Yeah, got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts, and kind of summarize, you know, what the big takeaways are from your perspective.

>>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they're held every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into, and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you.

>>Yeah, that's awesome. You guys have a really rich community: collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data.

>>Thank you. I really appreciate it.

>>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
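To make the row-versus-column discussion above concrete, here is a minimal Rust sketch of the two ideas Anais walks through: run-length compressing a column whose neighboring values repeat, and answering a min/max query by touching only the compressed runs. This is an illustrative toy under simplifying assumptions, not IOx's actual implementation; the function names are made up for this example, and production engines use far more sophisticated encodings.

```rust
// Toy column-oriented storage: run-length encode one column, then
// answer an aggregate query against the compressed runs only.

/// Collapse consecutive equal values into (value, run_length) pairs.
fn rle_encode(column: &[i64]) -> Vec<(i64, u32)> {
    let mut runs: Vec<(i64, u32)> = Vec::new();
    for &v in column {
        if let Some(last) = runs.last_mut() {
            if last.0 == v {
                last.1 += 1; // extend the current run
                continue;
            }
        }
        runs.push((v, 1)); // start a new run
    }
    runs
}

/// Min/max touches one entry per run, not one entry per raw row.
fn min_max(runs: &[(i64, u32)]) -> Option<(i64, i64)> {
    runs.iter().fold(None, |acc, &(v, _)| match acc {
        None => Some((v, v)),
        Some((lo, hi)) => Some((lo.min(v), hi.max(v))),
    })
}

fn main() {
    // A regulated room temperature barely changes across 1,000 readings.
    let mut room_temp = vec![21_i64; 400];
    room_temp.extend(std::iter::repeat(22_i64).take(350));
    room_temp.extend(std::iter::repeat(21_i64).take(250));

    let runs = rle_encode(&room_temp);
    println!("{} raw values -> {} runs", room_temp.len(), runs.len()); // 1000 -> 3
    println!("min/max = {:?}", min_max(&runs)); // Some((21, 22))
}
```

A thousand raw readings collapse to three runs here, which is exactly why a regulated room temperature compresses so cheaply, and scans so quickly, in a columnar layout.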
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB also has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series, so we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general purpose time series database, it basically had the set of features we were looking for.

>>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality, because InfluxDB Cloud is entirely managed; it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud or in a private cloud if that's preferred by a customer. Influx Data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have Influx Data by our side.

>>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you.

>>Good to see you. Thanks for having me.

>>You're really welcome. Listen, we've been covering open source software on The Cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source: mobile, social platforms, key databases, and of course InfluxDB, and Influx Data has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software?

>>So yeah, you know, at Influx we really thrive at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and, like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB.

>>You know, but I gotta ask you, Tim, because one of the challenges that we've seen, in particular in the heyday of Hadoop, is that the innovations come so fast and furious, and as a software company you gotta place bets, you gotta commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge?

>>Oh, it moves fast, yeah. That's a benefit, though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is fail fast and fail often; we try a lot of things. You know, you look at Kubernetes, for example: that ecosystem is driven by thousands of intelligent developers, engineers, builders. They're adding value every day, so we have to really keep up with that.
And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day.

>>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. Kubernetes is just consistently up and to the right, even with the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform?

>>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences.

>>Just to follow up on that, it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct?

>>Yeah, so we've basically built, more or less, platform engineering. This is the new hot phrase. Kubernetes has made a lot of things easy for us, because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud.

>>Yeah, and I know I'm taking a little bit of a tangent, but is that, I'll call it a PaaS layer if I can use that term, are there specific attributes tied to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value-add, or is it pretty much generic?

>>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we, as an SRE group, as an ops team, can manage with very few people, really, and we can stamp out clusters across multiple regions in no time.

>>So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers?

>>Yeah, so what we're doing is, like everybody else will do, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
So really us being in all the different clouds and iterating on that and being in all sorts of different regions allows for people to really get out of the, the business of man trying to manage that big data, have us take care of that. And of course as we change the platform end users benefit from that immediately. And, >>And so obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IOT and the Edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take, we take security super seriously. It, it's built into our dna. We do a lot of work to ensure that our platform is secure, that the data we store is, is kept private. It's of course always a concern. You see in the news all the time, companies being compromised, you know, that's something that you can have an entire team working on, which we do to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software, bill of materials, if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that, that's just part of our jobs to make sure that the platform that we're running it has, has fully vetted software and, and with open source especially, that's a lot of work. And so it's, it's definitely new territory. Supply chain attacks are, are definitely happening at a higher clip than they used to, but that is, that is really just part of a day in the, the life for folks like us that are, are building platforms. >>Yeah, and that's key. I mean especially when you start getting into the, the, you know, we talk about IOT and the operations technologies, the engineers running the, that infrastructure, you know, historically, as you know, Tim, they, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that is again, take away that heavy lifting to r and d so you can focus on some of the other activities. Right. Give us the, the last word and the, the key takeaways from your perspective. >>Well, you know, from my perspective I see it as, as a a two lane approach with, with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gaping. Sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want torus their data to, to a company that's, that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think more hybrid approach is, is where the future lives and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up. Today's session, you're watching The Cube. >>Are you looking for some help getting started with InfluxDB Telegraph or Flux Check >>Out Influx DB University >>Where you can find our entire catalog of free training that will help you make the most of your time series data >>Get >>Started for free@influxdbu.com. >>We'll see you in class. 
>>Okay, so we heard today from three experts on time series and data, how the Influx DB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow and the Rust Programming environment Data fusion par K are being leveraged to support realtime data analytics at scale. We also learned about the contributions in importance of open source software and how the Influx DB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember these sessions, they're all available on demand. You can go to the cube.net to find those. Don't forget to check out silicon angle.com for all the news related to things enterprise and emerging tech. And you should also check out influx data.com. There you can learn about the company's products. You'll find developer resources like free courses. You could join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving Influx DB into the smart data platform, made possible by influx data and brought to you by the Cube, your leader in enterprise and emerging tech coverage.

Evolving InfluxDB into the Smart Data Platform Full Episode


 

>>This past May, theCUBE, in collaboration with InfluxData, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database was, for many use cases, a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken every hour, every minute, every second, down to the millisecond, and how the world was moving toward realtime or near-realtime data analysis to support physical infrastructure like sensors, other devices, and IoT equipment. Time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and beyond. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and produced by theCUBE. My name is Dave Vellante and I'll be your host today. Now, in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands on data, particularly around real-time data analytics use cases. First we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at InfluxData. We're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and by specific tools. And in this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. >>Now, these innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real-time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian we're gonna hear from Anais Dotis-Georgiou, who is a developer advocate at InfluxData. We're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yoakum, the director of engineering at InfluxData, who's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight, and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at InfluxData. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, not at all. I mean, I think for us it's been about staying ahead of the market.
I think if we think about what our customers are coming to us with now, you know, requests like SQL query support, things like that, we have to figure out a way to execute those for them in a way that will scale long term. And then we also wanna make sure we're innovating, staying ahead of the market, and anticipating those future needs. So, you know, this is really a transparent change for our customers. I think we'll be adding new capabilities over time that leverage this new engine, but initially the customers who are using us are gonna see just great improvements in performance, especially those that are working at the top end of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today, and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, like three years ago we were primarily on premises, right? We had our open source, we had an enterprise product, and shifting that technology, especially the open source code base, to a service basis where we were hosting it through multiple cloud providers, that was a long journey, I guess. You know, phase one was that we wanted to host enterprise for our customers, so we created a service where we just managed and ran our enterprise product for them. Phase two of this cloud effort was to optimize for multi-tenant, multi-cloud, to be able to host it in a truly SaaS manner where we could use some type of customer activity or consumption as the pricing vector. And that was sort of the birth of the real first InfluxDB Cloud, you know, which has been really successful. >>We've seen, I think, like 60,000 people sign up, and we've got tons and tons of both enterprises as well as new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis. And having that big pool of very diverse customers to chat with as they're using the product, as they're giving us feedback, et cetera, has pointed us in a really good direction in terms of making sure we're continuously improving that, and then also making these big leaps as we're doing with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is, and what it takes to make that shift from, you know, time series specialist to real-time analytics and being able to support both. >>Yeah, I mean, it's much more of an evolution, I think, than a shift or a pivot. Time series data is always gonna be fundamental, the basis of the solutions that we offer our customers and the ones that they're building on the raw APIs of our platform themselves. The time series market is one that we've worked diligently to lead.
I mean, I think when it comes to metrics, especially sensor data and app and infrastructure metrics, if we're being honest, our user base is well aware that the way we were architected was much more toward those backwards-looking, historical types of analytics, which are key for troubleshooting and making sure you don't run into the same problem twice. But we had to ask ourselves: what can we do to better handle those queries from a performance and time-to-response standpoint, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes, and then seconds? >>And now, with this new engine, we're really starting to talk about a query window that could be returning results in milliseconds from the time the data hit the ingest queue. And that's really getting to the point where, as your data is available, you can use it: you can query it, you can visualize it, and you can do all those sort of magical things with it. And I think getting all of that to a place where we're saying yes to the customer on all of the real-time queries and the multiple-language query support, you know, it was hard, but we're now at a spot where we can start introducing that to a limited number of customers, strategic customers and strategic availability zones to start, but everybody over time. >>So you're basically going from what happened, and you can still do that obviously, to what's happening now, in the moment? >>Yeah. I mean, if you think about time, it's always sort of past, right? In the moment right now, whether you're talking about a millisecond ago or a minute ago, that's pretty much right now for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and the sort of highly distributed nature of all of this. So yeah, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but now, in this context, maybe it's before the machine blows up. >>Yeah, operational real time is different, and that's one of the things that really triggered us to know we were heading in the right direction: just how many operational customers we have. Everything from aerospace and defense, where we've got companies monitoring satellites, to tons of industrial users using us as a process historian on the plant floor. And if we can satisfy their demands for a real-time historical perspective, that's awesome. I think what we're gonna do here is start to edge into the real time that they're used to in terms of the millisecond response times that they expect of their control systems, certainly not of their historians and databases. >>Is this available, these innovations, to InfluxDB Cloud customers only? Who can access this capability? >>Yeah, I mean commercially and today, yes.
You know, I think we want to emphasize that our goal is to get our latest and greatest and our best to everybody over time, of course. One of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub, inspect them, and even try to implement or execute some of it themselves in their own infrastructure. We are committed to bringing our latest and greatest to our cloud customers first, for a couple of reasons. Number one, there are big workloads there and those customers have high expectations of us. Number two, it also gives us the opportunity to monitor a little more closely how it's working, how they're using it, and how the system itself is performing. >>And so just being careful, maybe a little cautious, in terms of how big we go with this right away both limits the risk of any issues that can come with new software rollouts (we haven't seen anything so far) and gives us the opportunity to have meaningful conversations with a small group of users who are using the product. Once we get through that and they give us two thumbs up on it, it'll be: open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, foundationally, we built the new core on Rust. This is a newer, very popular systems language. It's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that we can inspect very closely, but also rely on the fact that it's going to behave well when it does find error conditions. I mean, we've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine and the power, performance, and stability it needs, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend, our TSM and TSI, our time-structured merge trees, this is a big break from that: Arrow on the in-memory side and then Parquet on the on-disk side. It allows us to present a unified set of APIs for those really fast real-time queries that we talked about, as well as for very large historical bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in terms of the machine learning community. And to get that all to work, we had to glue it together with Arrow Flight. That's what we're using as our RPC component. It handles the orchestration and the transportation of the columnar data, and now we're moving to a true columnar database model for this version of the engine. It removes a lot of overhead for us in terms of having to manage all that serialization and deserialization, and, again, it blurs that line between real-time and historical data: it's highly optimized for micro-batch and batch, but true streaming as well.
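To make the in-memory half of that split concrete, here is a minimal sketch of building a columnar record batch with the Rust implementation of Apache Arrow that Brian mentions. It is purely illustrative, not IOx internals: the room/temp/time schema is invented for the example, and the crate version in the comment is an assumption.

```rust
// Cargo.toml (assumed): arrow = "53"
use std::sync::Arc;

use arrow::array::{Float64Array, StringArray, TimestampNanosecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;

fn main() -> Result<(), arrow::error::ArrowError> {
    // Each field lives in its own contiguous column; this is the layout
    // that Arrow Flight would then ship between services.
    let schema = Arc::new(Schema::new(vec![
        Field::new("room", DataType::Utf8, false),
        Field::new("temp", DataType::Float64, false),
        Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
    ]));

    let batch = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(StringArray::from(vec!["kitchen", "kitchen", "garage"])),
            Arc::new(Float64Array::from(vec![21.0, 21.0, 14.5])),
            Arc::new(TimestampNanosecondArray::from(vec![1_000, 2_000, 3_000])),
        ],
    )?;

    println!("{} rows x {} columns", batch.num_rows(), batch.num_columns());
    Ok(())
}
```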
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data. Access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud.
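As a rough sketch of what that single write API looks like in practice, the snippet below posts one point of line protocol to the InfluxDB v2 write endpoint over HTTP. The host, org, bucket, and token are placeholders to substitute with your own values; treat this as an assumption-laden illustration rather than official client code (the official client libraries wrap this same endpoint).

```rust
// Cargo.toml (assumed): reqwest = { version = "0.11", features = ["blocking"] }
use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    // One point in line protocol: measurement, tag, field, nanosecond timestamp.
    let line = "home,room=kitchen temp=21.0 1667260800000000000";

    let resp = Client::new()
        // Placeholder host, org, and bucket.
        .post("https://cloud2.influxdata.com/api/v2/write?org=my-org&bucket=my-bucket&precision=ns")
        .header("Authorization", "Token MY_TOKEN") // placeholder token
        .header("Content-Type", "text/plain; charset=utf-8")
        .body(line)
        .send()?;

    println!("write status: {}", resp.status());
    Ok(())
}
```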
>>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here; she's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed; it's a columnar store, so it gives you compression efficiency; it's gonna give you faster query speeds; and you store files in object storage, so you get a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So, some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality, and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to do bulk data export and import, which is super useful, and also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem, and to have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language; of course, we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns, and you've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So, lots of platforms, lots of adoption with Rust. But why Rust, as an alternative to, say, C++? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance (it also compiles to native code, like C++), unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are among the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet requirements like having no limits on cardinality: for example, we're also using the Rust implementation of Apache Arrow, with that same control over memory. And Rust's packaging system, crates.io, offers everything you need out of the box: features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So, essentially, it gives you all the fine-grained control you need to use memory and all your resources as efficiently as possible, so that you can handle those really, really high cardinality use cases.
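A one-screen illustration of the dangling pointer point she makes: the equivalent of returning a pointer to a stack local, which a C or C++ compiler will typically accept, is rejected by the Rust compiler before it can ever run. This is a generic language sketch, not engine code.

```rust
// This function does NOT compile. `s` is dropped when the function
// returns, so `&s` would be a dangling reference; rustc rejects it
// at compile time instead of leaving a latent use-after-free:
//
//     fn dangling() -> &'static String {
//         let s = String::from("21.0");
//         &s // error[E0515]: returns a reference to a local variable
//     }
//
// The safe alternative hands ownership to the caller:
fn owned() -> String {
    String::from("21.0")
}

fn main() {
    let temp = owned(); // the caller owns the value; nothing can dangle
    println!("temp = {temp}");
}
```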
>>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera: you know, in the old days, and even today, you do a lot of garbage collection in these systems, and there's an inverse impact on performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow, and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room, and also maybe the temperature of our stove. In our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have two rows with the two temperature values, for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>When you have column-oriented storage, you essentially take each column and group its values together. If that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, you might be able to imagine how equal values end up neighboring each other in the storage format, and this provides a perfect opportunity for cheap compression. That cheap compression then enables high cardinality use cases. It also enables faster scan rates: if you wanna find, say, the min and max temperature in the room across a thousand different points, you only have to read those thousand values in that one column in order to answer the question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution, so that we can better understand the benefits of column-oriented storage. >>If you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove; you'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out the one temperature value that you want at the one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from.
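Her regulated-room example is easy to reproduce in a few lines of plain Rust, no Arrow required: run-length encoding collapses the repeated neighboring values that a column groups together, and a min/max scan touches only that one column. A toy sketch of the principle, not how IOx actually encodes data:

```rust
// Run-length encode one column: equal neighboring values collapse into
// (value, count) pairs, which is why columnar layouts compress so cheaply.
fn rle(col: &[f64]) -> Vec<(f64, usize)> {
    let mut out: Vec<(f64, usize)> = Vec::new();
    for &v in col {
        match out.last_mut() {
            Some((last, n)) if *last == v => *n += 1,
            _ => out.push((v, 1)),
        }
    }
    out
}

fn main() {
    // A regulated room-temperature column: the values rarely change.
    let room_temp = [21.0, 21.0, 21.0, 21.0, 21.5, 21.5, 21.0];
    println!("{:?}", rle(&room_temp)); // [(21.0, 4), (21.5, 2), (21.0, 1)]

    // A min/max scan reads this one column only, never the tags,
    // timestamps, or other fields stored in their own columns.
    let min = room_temp.iter().copied().fold(f64::INFINITY, f64::min);
    let max = room_temp.iter().copied().fold(f64::NEG_INFINITY, f64::max);
    println!("min = {min}, max = {max}");
}
```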
>>Okay. So you basically described a traditional database with a row approach. But I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really kind of native. Is the former not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. Those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way it helps InfluxDB IOx is that it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is to you. So DataFusion helps enable the querying and transformation of that data. It also has a pandas API, so you can take advantage of pandas data frames and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, 'cause we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented, durable file format. It's important because it enables bulk import and export, and it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxData has contributed a lot of different things to the Apache ecosystem. For example, we contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion, for things like memory optimization and support for additional SQL features, such as timestamp arithmetic, EXISTS clauses, and memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement, and the more we invest in our own project as well. So it's that kind of symbiotic relationship with, and appreciation of, the open source community. >>Yeah, got it. You got that virtuous cycle going; people call it the flywheel. Give us your last thoughts, and kind of summarize what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. If you are interested in learning more about the technologies Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they're on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the InfluxDB IOx channel specifically to learn how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions, so if there's a particular technology or stack that you wanna dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community: collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for InfluxData, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
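To ground the DataFusion discussion, here is a minimal sketch of the pattern she describes: register a Parquet file and run SQL over it, with Arrow as the in-memory format underneath. The file name and columns are invented, the crate versions are assumptions, and this uses the open source DataFusion crate directly rather than InfluxDB's actual query path.

```rust
// Cargo.toml (assumed):
//   datafusion = "33"
//   tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Hypothetical file; any Parquet file with room/temp columns would do.
    ctx.register_parquet("temps", "temps.parquet", ParquetReadOptions::default())
        .await?;

    // DataFusion plans and executes the SQL over Arrow record batches.
    let df = ctx
        .sql("SELECT room, MIN(temp) AS lo, MAX(temp) AS hi FROM temps GROUP BY room")
        .await?;
    df.show().await?;

    Ok(())
}
```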
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB also has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series, so we need a way to store that data and the corresponding time series that are related to it. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and, as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed; it's probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud, or in a private cloud if that's preferred by a customer. InfluxData has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing, and they helped us solve them. As we've continued to grow, I'm really happy we have InfluxData by our side.
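To unpack "an SLO as a bunch of time series": an availability SLO reduces to counters of good and total events per window, plus a little arithmetic over them. A hypothetical back-of-the-envelope sketch (the numbers and the 99.9% objective are invented, and this is not Nobl9's code):

```rust
fn main() {
    // Two of the time series behind an availability SLO:
    // successful requests and total requests, per minute.
    let good: [u64; 5] = [998, 1_000, 997, 999, 1_000];
    let total: [u64; 5] = [1_000; 5];

    // SLI over the window: the fraction of requests that were good.
    let sli = good.iter().sum::<u64>() as f64 / total.iter().sum::<u64>() as f64;

    // Error budget burn: failures seen vs. failures the objective allows.
    let slo = 0.999;
    let budget_used = (1.0 - sli) / (1.0 - slo);

    println!("SLI = {sli:.4}");
    println!("error budget used = {:.0}%", budget_used * 100.0);
}
```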
>>Okay, we're back with Tim Yoakum, who is the director of engineering at InfluxData. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in theCUBE for more than a decade, and we've watched the innovation from the big data ecosystem: the cloud being built out on open source, mobile and social platforms, key databases, and of course InfluxDB, where InfluxData has been a big consumer of and contributor to open source software. So my question to you is: where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, Influx really thrives at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and, like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen, in particular in the heyday of Hadoop: the innovations come so fast and furious, and as a software company you gotta place bets, you gotta commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >>Oh, it moves fast, yeah. That's a benefit, though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. What we tend to do is fail fast and fail often. We try a lot of things. You know, you look at Kubernetes, for example: that ecosystem is driven by thousands of intelligent developers, engineers, and builders, and they're adding value every day. So we have to really keep up with that. And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts, seeing the most significant adoption and velocity, particularly along with cloud. Really, Kubernetes is still up and to the right consistently, even with the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like "containers junior." Now we're running Kubernetes everywhere: at AWS, Azure, and Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that: I presume that means there's a PaaS layer there, to allow you guys to have a consistent experience across clouds and out to the edge, wherever. Is that correct? >>Yeah, we've basically built more or less platform engineering; this is the new hot phrase. Kubernetes has made a lot of things easy for us, because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but with that, I'll call it a PaaS layer if I can use that term: are there specific attributes tied to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value-add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we as an SRE group, as an ops team, can manage with very few people, really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, what we're doing is, like everybody else will do, looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see that. But we don't want to try to reinvent the wheel. Like I mentioned with SQL data stores for metadata: perhaps let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering and have our developers focus on the InfluxData software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and, really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to continue testing things in flight and rolling out changes: new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes, and, like we mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so, in the end, we want you to focus on getting actual insights from your data instead of running infrastructure. You know, let us do that for you. >>And that makes sense. But are the innovations that we're talking about in the evolution of InfluxDB a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud-native technologies are really the hot thing. IoT, and industrial IoT especially: people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines, downsample that data, and send the rest of that data off to Influx Cloud, where the heavy processing takes place. So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to get out of the business of trying to manage that big data themselves and have us take care of it. And of course, as we change the platform, end users benefit from that immediately.
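The edge pattern Tim describes, ingesting at high rate locally and shipping only aggregates to the cloud, is essentially windowed downsampling. In a real deployment a Flux task or similar server-side tooling would do this; below is only a hand-rolled sketch of the idea, with invented nanosecond timestamps.

```rust
/// Average (timestamp_ns, value) samples into fixed windows,
/// returning one aggregate point per window.
fn downsample(samples: &[(u64, f64)], window_ns: u64) -> Vec<(u64, f64)> {
    let mut out = Vec::new();
    let mut bucket = samples.first().map(|&(t, _)| t - t % window_ns);
    let (mut sum, mut n) = (0.0, 0u32);
    for &(t, v) in samples {
        let b = t - t % window_ns;
        if Some(b) != bucket {
            // Window closed: emit its average and start the next one.
            if let Some(start) = bucket {
                out.push((start, sum / n as f64));
            }
            bucket = Some(b);
            sum = 0.0;
            n = 0;
        }
        sum += v;
        n += 1;
    }
    if let Some(start) = bucket {
        if n > 0 {
            out.push((start, sum / n as f64));
        }
    }
    out
}

fn main() {
    // 1 s raw samples downsampled into 2 s windows (timestamps in ns).
    let raw = [(0u64, 1.0), (1_000_000_000, 3.0), (2_000_000_000, 5.0)];
    println!("{:?}", downsample(&raw, 2_000_000_000));
    // Prints [(0, 2.0), (2000000000, 5.0)]
}
```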
>>And so you're obviously taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You know, you look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we use new tools. That's just part of our jobs, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into IoT and the operational technologies, the engineers running that infrastructure, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting from R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach: with Influx, with any time series data, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping; sure, there's plenty of need for that. But at the end of the day, people that don't want to run big data centers, people that want to entrust their data to a company that's got a full platform set up for them that they can build on, will send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming on the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? >>Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. >>Get started for free at influxdbu.com. >>We'll see you in class.
>> Okay, so we heard today from three experts on time series and data, how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming environment, DataFusion, and Parquet are being leveraged to support realtime data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products. You'll find developer resources like free courses. You can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 28 2022

Evolving InfluxDB into the Smart Data Platform Close


 

>> Okay, so we heard today from three experts on time series and data, how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in realtime. And we learned that key open source components like Apache Arrow and the Rust Programming environment DataFusion parquet are being leveraged to support realtime data analytics at scale. We also learned about the contributions and importance of open source software and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases in the future of realtime data analytics. Now remember these sessions, they're all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products, you'll find developer resources like free courses, you can join the developer community and work with your peers to learn and solve problems, and there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 18 2022


Evolving InfluxDB into the Smart Data Platform Open


 

>> This past May, theCUBE, in collaboration with InfluxData, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken, you know, every hour, every minute, every second, down to the millisecond, and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. Time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and other areas. And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and produced by theCUBE. My name is Dave Vellante, and I'll be your host today. Now, in this program, we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're going to hear from Brian Gilmore, who is the director of IoT and emerging technologies at InfluxData. And we're going to talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program, you're going to hear a lot about things like the Rust implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which are powering a new engine for InfluxDB. Now, these innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, they enable real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're going to hear from Anais Dotis-Georgiou, who is a developer advocate at InfluxData. And we're going to get into the "why's" of these open source capabilities, and how they contribute to the evolution of the InfluxDB platform. And then we're going to close the program with Tim Yocum. He's the director of engineering at InfluxData, and he's going to explain how the InfluxDB community actually evolved the data engine in mid-flight, and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started.
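As a rough illustration of the columnar building blocks named in this program, here is a minimal sketch assuming only the `pyarrow` package: millisecond-stamped readings held in an Apache Arrow table and persisted as Parquet, the same two formats the new engine builds on. It is a toy example, not the InfluxDB engine itself.

```python
# Minimal illustration of the Arrow + Parquet building blocks mentioned above:
# millisecond-resolution time series held columnar in memory (Arrow) and on
# disk (Parquet). A toy sketch, not the InfluxDB engine itself.
import pyarrow as pa
import pyarrow.parquet as pq
from datetime import datetime, timedelta

start = datetime(2022, 10, 18)
table = pa.table({
    "time": pa.array(
        [start + timedelta(milliseconds=i) for i in range(5)],
        type=pa.timestamp("ms"),
    ),
    "sensor": pa.array(["s1", "s1", "s2", "s2", "s1"]),
    "temperature": pa.array([21.0, 21.1, 19.8, 19.9, 21.3]),
})

pq.write_table(table, "readings.parquet")           # columnar, compressed on disk
loaded = pq.read_table("readings.parquet", columns=["time", "temperature"])
print(loaded.to_pydict())                           # read back only two columns
```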

Published Date : Oct 18 2022


Breaking Analysis: How CrowdStrike Plans to Become a Generational Platform


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> In just over 10 years, CrowdStrike has become a leading independent security firm with more than $2 billion in annual recurring revenue, nearly 60% ARR growth, an approximate $40 billion market capitalization, very high retention rates, low churn, and a path to $5 billion in revenue by mid decade. The company has joined Palo Alto Networks as a gold standard pure play cybersecurity firm. It has achieved this lofty status with an architecture that goes beyond a point product, with outstanding go to market and financial execution, some sharp acquisitions, and an ever increasing total available market. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis," and ahead of Fal.Con, CrowdStrike's user conference, we take a deeper look into CrowdStrike, its performance, its platform, and survey data from our partner ETR. Now, the general consensus is that spending on cyber is non-discretionary and has held up better than other technology sectors. While this is generally true, as this data shows, it's nuanced. Let's explore this a bit. First, this is a year-to-date chart of the stock performance of CrowdStrike relative to Palo Alto, the BUG ETF, which is a cyber index, the NASDAQ, and SentinelOne, a relatively new entrant to the IPO public markets. Now, as you can see, the security sector, as evidenced by the orange line, that cyber ETF, is holding up better than the overall NASDAQ, which is off 28% year-to-date. Palo Alto has held up incredibly well, the best, being off only around 4% year-to-date, whereas CrowdStrike is off in the double digits this year, but up, as we talked about in one of our last "Breaking Analysis" episodes on cyber, from its lows this past May. Now, CrowdStrike had a very nice beat and raise on August 30th, but the stock didn't respond well initially. We asked "Breaking Analysis" contributor Chip Simonton for his technical take, and he stated that CrowdStrike has bounced around for the last three months in its current range. He said that cyber stocks have held up better than the rest of the market, as we're showing, and now might be a good time to take a shot, but he is cautious. FedEx had a warning today of a global recession, and that's an obvious cause for concern. You know, maybe some of these quality cyber stocks like Palo Alto and CrowdStrike and Zscaler will outperform in a recession, but that play is not for the faint of heart. In fact, it's feeling like a longer, more drawn out tech lash than many had hoped, perhaps as much as 12 to 18 months of bouncing around with sellers still in control, is generally the sentiment from Simonton. So in terms of cyber spending being non-discretionary, we'd say it's less discretionary than other IT sectors, but the CISO still does not have an open wallet, as we've reported before. We've seen that spending momentum has decelerated in all sectors throughout the year. This is an across the board trend. Now, independent of the stock price, George Kurtz, CEO of CrowdStrike, is running a marathon, not a sprint, and this company is running at a nice pace despite tough macro headwinds. The company is free cash flow positive and is in the black on a non-GAAP operating profit basis, and yet it's growing ARR at nearly 60%.
Frank Slootman uses the term inherent profitability, meaning that the company could drive more profits if it wanted to dial down expenses, especially in go to market costs. But that would be a mistake for a company like CrowdStrike, in our opinion. While it has an impressive nearly 20,000 customers, there are hundreds of thousands of customers that CrowdStrike could penetrate. So like Snowflake and Slootman, Kurtz is not taking his foot off the gas. Now, the fundamental strength of CrowdStrike and its secret sauce is its architecture and platform, in our view, so let's take a deeper look. CrowdStrike believes that the unstoppable breach is a myth. Now, CISOs don't agree with that, because they assume they're going to get breached, but that's CrowdStrike's point of view, so lofty vision. CrowdStrike's mission is to consolidate the patchwork of solutions by introducing modules that go beyond point products. CrowdStrike has more than 20 modules, I think 22, that span a range of capabilities, as shown in this table. Now, there are a few critical aspects of the CrowdStrike architecture that bear mentioning. First is the lightweight agent; that is fundamental. You know, we're used to thinking that agentless is good and agent is bad, but in this case, a powerful but small, slim, easy to install but unobtrusive agent has its advantages, because it supports multiple CrowdStrike modules. The second point is CrowdStrike from the beginning has been dogmatic about getting all the telemetry data into the cloud. It sort of shunned doing bespoke on prem so that all the data could be analyzed. So the more agents that CrowdStrike installs around the world, the more data it has access to, and the better its intelligence. Few companies have access to more data; perhaps Microsoft, given its scale and size, is an exception in that endpoint space. CrowdStrike has developed a purpose-built threat graph and analytics platform that allows it to quickly ingest key telemetry data in near real time and detect not only known malware, that's pretty straightforward, pretty much anybody could do that, but, using machine intelligence, it can also detect unknown malware and other potentially malicious behavior using indicators of attack, or IOAs. Humio is shown here as a company that CrowdStrike bought for around $400 million in early 2021. It's the company's Splunk killer and will serve as an observability platform. It's really starting to take off; that's a great market for them to go after. CrowdStrike, to try to put it into sort of a summary, uses a three pronged approach. First is its next generation anti-virus, a SaaS-based solution that can do fast lookups to telemetry data, and that data lives in the cloud. And this leverages CrowdStrike's proprietary threat graph. Now, the second is endpoint detection and response. CrowdStrike sends all endpoint activity to the cloud and can process the data in real time. CrowdStrike EDR allows you to search data history, and it partners with threat intelligence platforms who push their data into the CrowdStrike cloud. This increases CrowdStrike's observation space. It also has containment capabilities in EDR to fence off compromised systems. Now, the third leg of the stool is CrowdStrike's world class managed hunting approach.
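To make the indicators-of-attack idea concrete in the abstract: rather than matching a known malware signature, an IOA matches a sequence of behaviors. The sketch below is a toy illustration of that matching logic; the event names and the rule are invented, and this is not CrowdStrike's engine, data model, or API.

```python
# Toy illustration of an indicator-of-attack (IOA) style rule: instead of
# matching a known-malware hash, flag a *behavioral sequence* of events.
# A conceptual sketch only, not CrowdStrike's engine or data model.
from collections import defaultdict

# A hypothetical IOA: an Office app spawns a shell, which then makes a
# network connection, in that order, on the same host.
IOA_SEQUENCE = ["office_spawns_shell", "shell_network_connection"]

def detect_ioa(events):
    """events: iterable of (host, event_type) tuples in time order."""
    progress = defaultdict(int)  # host -> index into IOA_SEQUENCE
    alerts = []
    for host, event_type in events:
        if event_type == IOA_SEQUENCE[progress[host]]:
            progress[host] += 1
            if progress[host] == len(IOA_SEQUENCE):
                alerts.append(host)   # full behavioral sequence observed
                progress[host] = 0
    return alerts

telemetry = [
    ("host-a", "office_spawns_shell"),
    ("host-b", "shell_network_connection"),   # out of order: no alert
    ("host-a", "shell_network_connection"),   # completes the sequence: alert
]
print(detect_ioa(telemetry))  # ['host-a']
```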
Like many firms, CrowdStrike has a crack team of experts that is looking at the data, but CrowdStrike's advantage is the amount of data, that observation space that we just talked about, and the near real time capabilities of the architecture, thanks to that proprietary database they've developed. And all this is built in the cloud, so it enables global scale. And of course, agility. Now, let's dig into some of the survey data and take a look at what ETR respondents are saying about the spending momentum for CrowdStrike in context with its peers. Here's a very recent dataset, the preliminary data from the October dataset in ETR's survey, shared with us by Eric Bradley, ETR's head of strategy. He runs the round tables and is a frequent "Breaking Analysis" contributor. This is an XY graph with Net Score, or spending momentum, on the vertical axis and the overlap, or pervasiveness in the survey, on the horizontal axis. That dotted red line at 40% indicates an elevated level of spending velocity. Anything above that, we consider really impressive. Note the CrowdStrike progression since the pandemic started. The two notable points are, one, that CrowdStrike has remained consistently above that 40% mark, and two, it has made notable progress to the right. You can see that sort of squiggly line consistently increasing its share, with one little anomaly there in the early days, over a two-year period. The other call out here is Microsoft in the upper-right. We circled Microsoft as usual. Microsoft messes up the data because it's such a dominant player and, as referenced earlier, has massive scale and very high quality telemetry from its endpoints. Unlike AWS, Microsoft is a direct competitor of CrowdStrike's. Nonetheless, the sector remains very strong with lots of players. Cyber is a large and expanding TAM with too many point tools that CrowdStrike is well positioned to consolidate, in our view. Now, here's a more narrow view of that same XY graph. What it does is take out Microsoft, to kind of normalize the data a bit, and it compares a number of firms that specialize in endpoint along with CrowdStrike: Tanium, which also has a lightweight agent, by the way, and appears to be doing pretty well; SentinelOne, which did a relatively recent IPO, took off, but the stock hasn't done as well since, as you saw earlier; Carbon Black, which VMware bought for around $2 billion; and Cylance, which is the Blackberry pivot. Now, we've also for context included Palo Alto and Cisco, because they are major players with a big presence in security, and they've got solutions that compete with CrowdStrike. But you can see how CrowdStrike looms large with a higher net score than these others. Although Palo Alto is very impressive, as is Cisco, steady, CrowdStrike also has a very steady posture instead of just moving along that X axis. Let's now take a look at XDR, extended detection and response. XDR is kind of a bit of a buzzword, but CrowdStrike seems to be taking the mantle and trying to sort of own the category and define it, in our view. It's a natural evolution of endpoint detection and response, EDR. In a recent ETR Roundtable hosted by our colleague Eric Bradley, the sentiment among several CIOs is that existing SIEM, security information and event management, platforms are inadequate, and some see XDR as a replacement for, or at least a strong complement to, SIEM. CISOs want a single view of their data. Hmm, you haven't heard that before.
They want help prioritizing potentially high impact breaches, and they want to automate the low level stuff, because the problem is sometimes too much information becomes information overload and you can't prioritize. So they want to consolidate platforms. They want better consistency. They have too many dashboards, too many stove pipes. They have difficulty scaling, and they have inconsistent telemetry data. As one CISO said, it's a call out here, "If the regulatory requirement isn't there, I absolutely would get rid of my SIEM." So CrowdStrike, we feel, is in a good position to continue to gain share and disrupt this space. And that's what Dave Nicholson and I will be looking for next week when theCUBE is at Fal.Con, CrowdStrike's user conference. We'll be there for two days at the Aria in Vegas. In addition to CrowdStrike's CEO, we'll hear from government cyber experts, we always hear that at security conferences, and the CEO of Mandiant. Google just the other day closed its $5 billion plus acquisition of Mandiant, which is a threat intelligence expert and MSSP. I'm going to hear a lot about MSSPs, by the way. CrowdStrike has a growing MSSP base. We think that's a really interesting sector, because many companies don't have a SOC. As many as 50% of companies in the United States don't have a security operations center. So they need help, and that's where MSSPs come in. At the conference, there'll be a real focus on the Falcon platform, and we expect CrowdStrike to educate the audience on its multiple modules and how to take advantage of the capabilities beyond endpoint. And we'll also be watching for the ecosystem conversations. We saw this at re:Inforce, for example, where CrowdStrike and Okta were presenting together to show how these companies' products complement each other in the marketplace. Sometimes it gets confusing when you hear that CrowdStrike has an identity product; Okta, of course, is the identity specialist. So we'll be helping extract that signal from the noise, because a generational company must have a strong ecosystem. CrowdStrike is evolving, and our belief is that it has some work to do to create a stronger partner flywheel, and we're eager to dig into that next week. So if you're at the event, please do stop by theCUBE and say hello to Dave Nicholson and myself. Okay, we're going to leave it there today. Many thanks to Chip Simonton and Eric Bradley for their input and contributions to today's episode. Thanks to Alex Myerson, who does production and also manages our podcast, and Ken Schiffman as well, in our Boston studios. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters, and Rob Hof is our editor in chief over at siliconangle.com. He does some wonderful editing, and I really appreciate that. Remember, all these episodes are available as podcasts wherever you listen; just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com or DM me @DVellante or comment on our LinkedIn posts. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music)

Published Date : Sep 17 2022


Lie 2, An Open Source Based Platform Cannot Give You Performance and Control | Starburst


 

>> We're back with Justin Borgman of Starburst and Richard Jarvis of EMIS Health. Okay, we're going to get into lie number two, and that is: an open source based platform cannot give you the performance and control that you can get with a proprietary system. Is that a lie? Justin, the enterprise data warehouse has been pretty dominant and has evolved and matured. Its stack has matured over the years. Why is it not the default platform for data? >> Yeah, well, I think that's become a lie over time. So I think, you know, if we go back 10 or 12 years ago, with the advent of the first data lakes, really around Hadoop, that probably was true, that you couldn't get the performance that you needed to run fast, interactive SQL queries in a data lake. Now, a lot's changed in 10 or 12 years. I remember in the very early days, people would say you'll never get performance, because you need to store data in a columnar format. And then columnar formats were introduced to the data lake. You have the Parquet, ORC, and Avro file formats that were created to ultimately deliver performance out of that. So, okay, we got largely over the performance hurdle. More recently, people will say, well, you don't have the ability to do updates and deletes like a traditional data warehouse. And now we've got the creation of new data formats, again, like Iceberg and Delta and Hudi, that do allow for updates and deletes. So I think the data lake has continued to mature. And I remember a quote from Curt Monash many years ago, where he said, you know, it takes six or seven years to build a functional database. I think that's right. And now we've had almost a decade go by. So these technologies have matured to really deliver very, very close to the same level of performance and functionality as cloud data warehouses. So I think the reality is that's become a lie, and now we have giant hyperscale internet companies that don't have the traditional data warehouse at all. They do all of their analytics in a data lake. So I think we've proven that it's very much possible today. >> Thank you for that. And so, Richard, talk about your perspective as a practitioner in terms of what open brings you versus closed. I mean, open is a moving target; I remember Unix used to be open systems, and so it is an evolving spectrum. But from your perspective, what does open give you that you can't get from a proprietary system, or what are you fearful of in a proprietary system? >> I suppose, for me, open buys us the ability to be unsure about the future, because one thing that's always true about technology is it evolves in a direction slightly different to what people expect, and what you don't want to have done is backed yourself into a corner that then prevents you from innovating. So if you have chosen a technology and you've stored trillions of records in that technology, and suddenly a new way of processing or machine learning comes out, you want to be able to take advantage of it; your competitive edge might depend upon it. And so, I suppose for us, we acknowledge that we don't have perfect vision of what the future might be. And so, by backing open storage technologies, we can apply a number of different technologies to the processing of that data. And that gives us the ability to remain relevant and innovate on our data storage.
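The row-level updates and deletes Justin credits to the newer table formats look roughly like this in practice. A minimal sketch, assuming a PySpark session already configured with the Delta Lake extensions; the table name and data are hypothetical, and Iceberg and Hudi expose similar SQL.

```python
# Rough sketch of row-level updates and deletes on an open table format.
# Assumes a SparkSession already configured with the Delta Lake extensions
# (spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension); the table
# name and data are hypothetical. Iceberg and Hudi expose similar SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("open-format-dml").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS events (id BIGINT, status STRING)
    USING DELTA
""")
spark.sql("INSERT INTO events VALUES (1, 'new'), (2, 'new'), (3, 'stale')")

# Row-level DML that early data lakes could not do:
spark.sql("UPDATE events SET status = 'processed' WHERE id = 1")
spark.sql("DELETE FROM events WHERE status = 'stale'")

spark.sql("SELECT * FROM events ORDER BY id").show()
```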
And we have bought our way out of any performance concerns, because we can use cloud scale infrastructure to scale up and scale down as we need. And so we don't have the concern that we don't have enough hardware today to process what we want to achieve. We can just scale up when we need it and scale back down. So open source has really allowed us to remain at the cutting edge. >> So Justin, let me play devil's advocate here a little bit. I've talked to Zhamak about this, and obviously her vision is that data mesh is open source, with open source tooling; it's not proprietary. You're not going to buy a data mesh; you're going to build it with open source tooling, and vendors like you are going to support it. But come back to sort of today: you can get to market with a proprietary solution faster. I'm going to make that statement, you tell me if it's a lie. And then you can say, okay, we support Apache Iceberg, we're going to support open source tooling. Take a company like VMware, not really in the data business, but look at the way they embraced Kubernetes and, you know, every new open source thing that comes along; they say, we do that too. Why can't proprietary systems do that and be as effective? >> Yeah, well, I think at least within the data landscape, saying that you can access open data formats like Iceberg or others is a bit disingenuous, because really what you're selling to your customer is a certain degree of performance, a certain SLA, and those cloud data warehouses that reach beyond their own proprietary storage drop all the performance that they were able to provide. So it reminds me, again, of going back 10 or 12 years ago, when everybody had a connector to Hadoop and they thought that was the solution, right? But the reality was, a connector was not the same as running workloads in Hadoop back then. And I think similarly, being able to connect to an external table that lives in an open data format, you're not going to give it the performance that your customers are accustomed to. And at the end of the day, they're always going to be predisposed, they're always going to be incentivized, to get that data ingested into the data warehouse, because that's where they have control. And you know, the bottom line is the database industry has really been built around vendor lock-in. I mean, from the start, how many people love Oracle today? But they're customers, nonetheless. I think lock-in is part of this industry, and I think that's really what we're trying to change with open data formats. >> Well, it's interesting. It reminds me of when I see the gas price as I drive up, and I say, oh, that's the cash price; with a credit card, I've got to pay 20 cents more. But okay. So the argument then, so let me come back to you, Justin. What's wrong with saying, hey, we support open data formats, but yeah, you're going to get better performance if you keep it in our closed system? Are you saying that long term that's going to come back and bite you, because you're going to end up, you mentioned Oracle, you mentioned Teradata, yeah, by implication, you're saying that's where Snowflake customers are headed. >> Yeah, absolutely. I think this is a movie that, you know, we've all seen before.
At least those of us who've been in the industry long enough to see this movie play a couple of times. So I do think that's the future. And I think, you know, I loved what Richard said. I actually wrote it down, because I thought it was an amazing quote. He said, "It buys us the ability to be unsure of the future." That pretty much says it all. The future is unknowable, and the reality is, using open data formats, you remain interoperable with any technology you want to utilize. If you want to use Spark to train a machine learning model, and you want to use Starburst to query via SQL, that's totally cool. They can both work off the same exact data sets. By contrast, if you're focused on a proprietary model, then you're kind of locked in again to that model. I think the same applies to data sharing, to data products, to a wide variety of aspects of the data landscape where a proprietary approach kind of closes you off and locks you in. >> So I would say this, Richard, and I'd love to get your thoughts on it. I talk to a lot of Oracle customers, not as many Teradata customers, but a lot of Oracle customers, and they'll admit, yeah, you know, they're jamming us on price and the license cost, but we do get value out of it. And so my question to you, Richard, is: do the, let's call them data warehouse systems, or the proprietary systems, deliver a greater ROI sooner? And is that the allure that customers are attracted to, or can open platforms deliver as fast an ROI? >> I think the answer to that is, it can depend a bit. It depends on your business's skillset. So we are lucky that we have a number of proprietary teams that work in databases that provide our operational data capability, and we have teams of analytics and big data experts who can work with open data sets and open data formats. And so for those different teams, they can get to an ROI more quickly with different technologies. For the business, though, we can't do better for our operational data stores than proprietary databases. Today we can back very tight SLAs to them. We can demonstrate reliability from millions of hours of those databases being run at enterprise scale. But for an analytics workload, where increasingly our business is growing in that direction, we can't do better than open data formats with cloud-based data mesh type technologies. And so it's not a simple answer; neither one will always be the right answer for our business. We definitely have times when proprietary databases provide a capability that we couldn't easily represent or replicate with open technologies. >> Yeah. Richard, staying with you. You mentioned some things before that strike me. You know, the Databricks-Snowflake thing is always a lot of fun for analysts like me. Richard, you mentioned you have a lot of rockstar data engineers; Databricks is coming at it from a data engineering heritage, and you get Snowflake coming at it from an analytics heritage. Those two worlds are colliding. People like Sanjeev Mohan have said, you know what, I think it's actually harder to play in the data engineering world; i.e., it's easier for the data engineering world to go into the analytics world versus the reverse. But thinking about up and coming engineers and developers preparing for this future of data engineering and data analytics, how should they be thinking about the future? What's your advice to those young people?
What, what's your advice to those young people? >>So I think I'd probably fall back on general programming skill sets. So the advice that I saw years ago was if you have open source technologies, the pythons and Javas on your CV, you command a 20% pay, hike over people who can only do proprietary programming languages. And I think that's true of data technologies as well. And from a business point of view, that makes sense. I'd rather spend the money that I save on proprietary licenses on better engineers, because they can provide more value to the business that can innovate us beyond our competitors. So I think I would my advice to people who are starting here or trying to build teams to capitalize on data assets is begin with open license, free capabilities because they're very cheap to experiment with. And they generate a lot of interest from people who want to join you as a business. And you can make them very successful early, early doors with, with your analytics journey. >>It's interesting. Again, analysts like myself, we do a lot of TCO work and have over the last 20 plus years and in the world of Oracle, you know, normally it's the staff, that's the biggest nut in total cost of ownership, not an Oracle. It's the it's the license cost is by far the biggest component in the, in the blame pie. All right, Justin, help us close out this segment. We've been talking about this sort of data mesh open, closed snowflake data bricks. Where does Starburst sort of as this engine for the data lake data lake house, the data warehouse, it, it fit in this, in this world. >>Yeah. So our view on how the future ultimately unfolds is we think that data lakes will be a natural center of gravity for a lot of the reasons that we described open data formats, lowest total cost of ownership, because you get to choose the cheapest storage available to you. Maybe that's S3 or Azure data lake storage or Google cloud storage, or maybe it's on-prem object storage that you bought at a, at a really good price. So ultimately storing a lot of data in a data lake makes a lot of sense, but I think what makes our perspective unique is we still don't think you're gonna get everything there either. We think that basically centralization of all your data assets is just an impossible endeavor. And so you wanna be able to access data that lives outside of the lake as well. So we kind of think of the lake as maybe the biggest place by volume in terms of how much data you have, but to, to have comprehensive analytics and to truly understand your business and understanding holistically, you need to be able to go access other data sources as well. And so that's the role that we wanna play is to be a single point of access for our customers, provide the right level of fine grained access controls so that the right people have access to the right data and ultimately make it easy to discover and consume via, you know, the creation of data products as well. >>Great. Okay. Thanks guys. Right after this quick break, we're gonna be back to debate whether the cloud data model that we see emerging and the so-called modern data stack is really modern or is it the same wine new bottle when it comes to data architectures, you're watching the cube, the leader in enterprise and emerging tech coverage.

Published Date : Aug 22 2022


Bryan Inman, Armis | Managing Risk With The Armis Platform REV2


 

(upbeat music) >> Hello everyone, welcome back to Managing Risk Across the Extended Attack Surface with Armis. I'm John Furrier, your host of theCUBE. We've got the demo. Got here Bryan Inman, sales engineer at Armis. Bryan, thanks for coming on. We're looking forward to the demo. How you doing? >> I'm doing well, John, thanks for having me. >> We heard from Nadir describing Armis' platform, a lot of intelligence. It's like a search engine meets data at scale, an intelligent platform around laying out the asset map, if you will, the new vulnerability module, among other things, that really solves CISOs' problems. A lot of great customer testimonials, and we got the demo here that you're going to give us. What's the demo about? What are we going to see? >> Well, John, thanks. Great question. And truthfully, I think as Nadir has pointed out, what Armis as a baseline is giving you is great visibility into every asset that's communicating within your environment. And from there, what we've done is we've layered on known vulnerabilities associated with not just the device, but also what else is on the device. Are there certain applications running on that device, what are the versions of those applications, and what are the vulnerabilities known with that? So that really gives you great visibility in terms of the devices that folks don't necessarily have visibility into now: unmanaged devices, IoT devices, OT and critical infrastructure, medical devices, things that you're not necessarily able to actively scan or put an agent on. So not only is Armis telling you about these devices, but we're also layering on those vulnerabilities, all passively and in real time. >> A lot of great feedback we've heard, and I've talked to some of your customers. The agentless is a huge deal. The discoveries are awesome. You can see everything, and just getting real time information. It's really, really cool. So I'm looking forward to the demo for our guests. Take us on that tour. Let's go with the demo for the guests today. >> All right. Sounds good. So what we're looking at here within the Armis console is just a clean representation of the passive reporting of what Armis has discovered. So we see a lot of different types of devices, from your virtual machines and personal computers, things that are relatively easy to manage. But working our way down, you're able to see a lot of different types of devices that are not necessarily easy to get visibility into, things like your UPS systems, IP cameras, dash cams, et cetera, lighting systems. And in today's day and age, where everything is moving to that smart feature, it's great to have that visibility into what's communicating on my network, and to be able to layer on the risk factors associated with it as well as the vulnerabilities. So let's pivot over to our vulnerabilities tab and talk about the AVM portion, the asset vulnerability management. So what we're looking at is the dashboard, where we're reporting another clean representation, with customizable dashlets that give you visuals and reporting on things like new vulnerabilities as they come in, the most critical vulnerabilities, the newest as they roll in, and the vulnerabilities by type. We have hardware. We have application. We have operating systems. As we scroll down, we can see views that break it down by vulnerabilities by operating system, Windows, Linux, et cetera. We can create dashlets that show you views of the number of devices that are impacted by these CVEs.
And scrolling down, we can see how long these vulnerabilities have been sitting within my environment. What are the oldest vulnerabilities we have here? And then also, of course, vulnerabilities by application, so things like Google Chrome, Microsoft Office. So we're able to give a good representation of the amount of vulnerabilities as they're associated with the hardware and applications as well. So we're going to dig in and take a deeper look at one of these vulnerabilities here. I'm excited to talk today about where Armis AVM is, but also where it's going as well. So we're not just reporting on things like the CVSS score from NIST NVD. We're also able to report on things like the exploitability of that: how actively is this CVE being exploited in the wild? We're reporting EPSS scores, for example. We're able to take open source information, as well as a lot of the partnerships that we have with other vendors, that are giving us a lot of great value on known vulnerabilities associated with applications and with hardware, et cetera. But where we're going with this is, in very near future releases, we're going to be able to take an algorithmic approach: what are the most critical CVSS scores that we see? How exploitable are those? What are common threat actors doing with these CVEs? Have they weaponized these CVEs? Are they actively using those weaponized tools to exploit these within other folks' environments? And who's reporting on these? So we're going to take all of these, and then really add that Armis flavor: we already know what that device is, and we, and the users, can speak to the business criticality of that device. So we're able to pivot over to the matches as we see the CVEs. We're able to very cleanly view what exactly are the devices that the CVE resides on. And as you can see, we're giving you more than just an IP address, a lot more context, and we're able to click in and dive into what exactly these devices are, and more importantly, how critical these devices are to my environment. If one of these devices were to go down, if it were to be a server, whatever it may be, I would want to focus on those particular devices, ensuring that that CVE, especially if it's an exploitable CVE, is addressed earlier than the others, and really be able to manage and prioritize these. Another great feature about it is the auto-resolve capability we have. For example, we're looking at a particular CVE in terms of its patch and build number from Windows 10. We've passively detected that this particular personal computer is running Windows 10, and the build and revision numbers on it. And then, once Armis passively discovers an update to that firmware and patch level, we can automatically resolve that, giving you confidence that it has been addressed on that particular device. We're also able to customize, look through, and potentially select a few of these. Say these particular devices reside on your guest network or an employee wifi network, where we don't necessarily, I don't want to say care, but we don't necessarily value that as much as something internal that holds significantly more business criticality. So we can select some of these and potentially ignore or resolve them, for whatever the reasons may be, as you see here, to be able to really, truly manage and prioritize these CVEs. As I scroll up, I can pivot over to the remediation tab and open up each one of these.
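The prioritization algorithm Bryan previews, blending CVSS severity, exploit probability, weaponization, and the business criticality of the device, might look something like the toy model below. The weights and fields are invented for illustration; this is not Armis's actual scoring algorithm.

```python
# Toy illustration of blending severity, exploitability, and business
# criticality into one priority score. Weights and fields are invented for
# the sketch; this is not Armis's actual scoring algorithm.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                 # 0-10 base severity (NIST NVD)
    epss: float                 # 0-1 probability of exploitation (EPSS)
    weaponized: bool            # known weaponized exploit in the wild
    device_criticality: float   # 0-1, e.g. guest-wifi printer vs. core server

def priority(f: Finding) -> float:
    score = (f.cvss / 10) * 0.4 + f.epss * 0.3 + f.device_criticality * 0.3
    if f.weaponized:
        score *= 1.5  # bump anything actively weaponized
    return round(score, 3)

findings = [
    Finding("CVE-2021-44228", cvss=10.0, epss=0.97, weaponized=True,
            device_criticality=0.9),
    Finding("CVE-2022-0001", cvss=6.5, epss=0.02, weaponized=False,
            device_criticality=0.2),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, priority(f))
```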
So what this is doing, essentially, is Armis has, through our knowledge base, been able to work with the vendors and pull down the patches associated with these. And within the remediation portion, we're able to view, for example, if we were to pull down the patch from this particular vendor and apply it to these 60 devices that you see here, which patches are going to give me the most impact as I prioritize these and take care of these affected devices. And lastly, as I pivot back over, where we're at now is we're able to allow users to customize the organizational priority of this particular CVE. It may have been given a high CVSS score, but maybe, for whatever reason, in terms of this particular logical segment of my network, I'm going to give it a low priority, whatever the use case may be; we have compensating controls set in place that render this CVE not impactful to this particular segment of my environment. So we're able to add that organizational priority to that CVE. And where we're going, as you can see that popped up here, is we're going to start to be able to apply the organizational priority at the actual device level. So we'll see a column added here where we'll see the business impact of that device, based on the importance of that particular segment of your environment or the device type, be it a critical networking device or maybe a critical infrastructure device, PLCs, controllers, et cetera, really giving you that passive reporting on the CVEs in terms of what the device is within your network. And then finally, we do integrate with your vulnerability management and scanners as well. So if you have a scanner actively scanning these, but potentially it's missing segments of your network, or it's not able to actively scan certain devices on your network, that's the power of Armis: being able to come back in and give you that visibility of not only what those devices are, but also what vulnerabilities are associated with those passive devices that aren't being scanned on your network today. So with that, that concludes my demo. I'll kick it back over to you, John. >> Awesome. Great walkthrough there. Take me through what you think is the most important part of that. Is it the discovery piece? Is it the interaction? What's your favorite? >> Honestly, I think my favorite part about that is being able to have the visibility into the devices that a lot of folks don't see currently. So those IoT devices, those OT devices, things that you're not able to run a scan on or put an agent on. Armis is not only giving you visibility into them, but also layering in, as I said before, those vulnerabilities on top of that. That's just visibility that a lot of folks today don't have. So Armis does a great job of giving you visibility and the vulnerabilities and risks associated with those devices. >> So I have to ask you, when you give this demo to customers and prospects, what's the reaction? Falling out of their chair moment? Are they more skeptical? It's almost too good to be true, and end to end vulnerability management is a tough nut to crack in terms of a solution.
>> Honestly, a lot of clients that we've had, especially within the OT and the medical side, they're blown away. Because at the end of the day, when we can give them that visibility, as I've said, hey, I didn't even know that those devices resided in that portion, not only are we showing them what they are and where they are, with enrichment on risk factors, et cetera, but then we show them, hey, we've worked with that vendor, whoever it may be, Rockwell, et cetera, and we know that there's vulnerabilities associated with those devices. So they just seem to be blown away by the fact that we can show them so much about those devices from behind one single console. >> It reminds me of the old days. I'm going to date myself here. Remember the old Google Maps mashup days? Customers talk about this as the Google Maps for their assets. And when you have the Google Maps, and you have the Ubers out there, you can look at the trails, you can look at what's happening inside the enterprise. So there's got to be a lot of interest in, once you get the assets, what's going on in those networks, or on those roads, if you will, 'cause you've got packet movement. You got things happening. You got upgrades. You got changing devices. It's an always-on, kind of living thing. >> Absolutely. Yeah, it's what's on my network, and more importantly at times, what's on those devices? What are the risks associated with the applications running on those? How are those devices communicating? And then, as we've seen here, what are the vulnerabilities associated with those, and how can I take action on them? >> Real quick, put a plug in for where I can find the demo. Is it online? Is it on YouTube? On the website? Where does someone see this demo? >> Yeah, the Armis website has a lot of demo content loaded, and we can get you in touch with engineers like myself to provide demos whenever needed. >> All right, Bryan, thanks for coming on the show. Appreciate it. Sales engineer at Armis, Bryan Inman, giving the demo god award out to him. Good job. Thanks for the demo. >> Thanks, thanks for having me. >> Okay, in a moment, we're going to have my closing thoughts on this event, and really the impact to the business operations side. I'm John Furrier of theCUBE. Thanks for watching. (upbeat music)
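The remediation view Bryan showed, ranking which patch gives the most impact, reduces to a simple grouping exercise. A toy sketch with invented device, CVE, and patch identifiers:

```python
# Toy sketch of ranking patches by impact: count how many affected devices
# each available patch would remediate. Data here is invented for illustration.
from collections import defaultdict

# (device_id, cve_id) pairs from vulnerability matches, and the patch that
# resolves each CVE, as pulled from a vendor knowledge base.
matches = [("pc-01", "CVE-A"), ("pc-02", "CVE-A"), ("pc-02", "CVE-B"),
           ("srv-01", "CVE-B"), ("srv-02", "CVE-C")]
patch_for_cve = {"CVE-A": "KB5001", "CVE-B": "KB5002", "CVE-C": "KB5003"}

impact = defaultdict(set)  # patch -> set of devices it would remediate
for device, cve in matches:
    impact[patch_for_cve[cve]].add(device)

for patch, devices in sorted(impact.items(), key=lambda kv: -len(kv[1])):
    print(f"{patch}: remediates {len(devices)} device(s): {sorted(devices)}")
```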

Published Date : Jun 21 2022


Tim Everson, Kalahari Resorts and Conventions | Manage Risk with the Armis Platform


 

>> Okay, welcome back to the portion of the program for customer lightning talks, where we chat with Armis' customers for a rapid-fire five-minute session on their CISO perspectives and insights into cybersecurity. First up is Tim Everson, CISO of Kalahari Resorts and Conventions. Let's get it going. Hi, Tim. Welcome to theCUBE and the Armis program, managing risk across your extended surface area. >> Thanks for having me, appreciate it. >> So let's get going. Unified visibility across the extended asset surface is key. You can't secure what you can't see. Tell me about what you're able to centralize, your views on network assets, and what impact Armis has had on your business. >> Sure. So traditionally, you have all your various management platforms, your Cisco platforms, your SIEMs, your wireless platforms, all the different pieces, and you've got all this disparate data out there, and you've got to chase all of this data through all these different tools. Armis is fantastic and was really, point blank, drop-in-place for us as far as getting access to all of that data, all in one place, and giving us visibility to everything. It basically opened the doors, letting us see our customer wireless traffic, our internal traffic, our PCI traffic, because we deal with credit cards, HIPAA compliance, all this traffic, all these different places, all into one. >> All right, next up: vulnerability management is a big topic, across all assets, not just IT devices. The gaps are there in the current vulnerability management programs. How has Armis vulnerability management made things better for your business, and what can you see now that you couldn't see before? >> So Armis gives me better visibility of the network side of these vulnerabilities. You have your Nessus vulnerability scanners, the things that look at machines, look at configurations and hard facts. Nessus gives you all those. But when you turn to Armis, Armis looks at the network perspective, takes all that traffic that it's seeing on the network, and gives you the network side of these vulnerabilities. So you can see if something's trying to talk out to a specific port or to a specific host on the internet, and Armis consolidates all that and gives you trusted sources of information to validate where those are coming from. >> When you take into account the criticality of all the different kinds of assets involved in a business operation, and they're becoming wider, especially with edge and other areas, how has the security workload changed? >> The security workload has increased dramatically, especially in hospitality. In our case, not only do we have hotel rooms and visitors and our guests, we also have a convention center that we deal with. We have water parks and fun things for people to do, families and businesses alike. And so when you add all those things up, and you add the wireless, and you add the network and the audio-video and all these different pieces that come into play with all of those things in hospitality, and you add our convention centers on top of it, the footprint has just expanded enormously in the past few years. >> When you have a digital transformation in a use case like yours, it's very diverse. You need a robust network, you need a robust environment to implement SaaS solutions: no agents to deploy, no updates needed. You've got to be in line with that to execute and scale. How easy was Armis to implement, in terms of ease of use, simplicity, the plug and play?
In other words, how quickly did you achieve that time to value? >> Oh, goodness. We did a proof of concept about three months ago in one of our resort locations. We dropped in an Armis appliance, and literally within the first couple hours of the appliance being on the network, we had data on 30 to 40,000 devices that were touching our network. Very quick and easy, very drop-in plug and play, and moving from the POC to production, same deal. We dropped in these appliances on site, and now we're seeing over 180,000 devices touching our networks within a given week. >> Armis has this global asset knowledge base, a crowdsourced asset intelligence engine. It's a game changer. It tracks managed and unmanaged IoT devices. Were you shocked when you discovered how many assets it was able to find, and what impact did that have for you? >> Oh, absolutely. Not only do we have the devices that we have, but we have guests that bring things on site all the time, Roku TVs and players and Amazon Fire Sticks and all these different things that are touching our network, and we see those in real time, and how much traffic they're using. We can see utilization, we can see exactly what's being brought on, we can see vehicles in our parking lot that have access points turned on. I mean, it's just amazing how much data this opened our eyes to; you know it's there, but you don't ever see it. >> It's bring-your-own-equipment to the resort, just so you can watch all your Netflix over an HDMI cable. Everyone's doing it now. I mean, this is the new user behavior. Great insight. Anything more you'd want to say about Armis for the folks watching? >> I would say the key is they're very easy to work with. The team at Armis has worked very closely with me to get the integrations that we've put in place with our networking equipment, with our wireless, with different pieces of things, and they're working directly with me to help integrate some other things that we've asked them to do that aren't there already. Their team is very open. They listen, they take everything that we have to say as a customer to heart, and they really put a lot of effort into making it happen. >> All right, Tim. Well, thanks for your time. I'm John Furrier with theCUBE, the leader in enterprise tech coverage. Up next in this lightning talk session is Brian Gilligan, Manager, Security and Operations at Brookfield Properties. Thanks for watching.
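As a concrete illustration of the agentless, passive discovery Everson describes, one classic signal an appliance can read off the wire is the vendor (OUI) prefix of an observed MAC address. The sketch below is an editorial simplification: real fingerprinting combines far richer signals (DHCP options, TLS fingerprints, traffic behavior), and the OUI entries shown should be checked against the IEEE registry rather than taken as authoritative.

```python
# Toy illustration of one passive-discovery signal: classify observed devices
# by the OUI (vendor) prefix of their MAC address. The OUI values below are
# examples only; consult the IEEE registry for authoritative assignments.
OUI_VENDORS = {
    "B0:A7:37": "Roku",
    "F0:27:2D": "Amazon",
    "00:0C:29": "VMware",
}

def classify(mac: str) -> str:
    """Return a vendor guess from the first three octets of a MAC address."""
    return OUI_VENDORS.get(mac.upper()[:8], "unknown vendor")

seen_macs = ["b0:a7:37:12:34:56", "f0:27:2d:ab:cd:ef", "de:ad:be:ef:00:01"]
for mac in seen_macs:
    print(mac, "->", classify(mac))
```

A lookup like this is one reason guest gadgets such as streaming sticks can be named within seconds of touching the wifi, before any deeper inspection runs.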

Published Date : Jun 21 2022


Brian Gilligan, Brookfield Properties | Manage Risk with the Armis Platform


 

>> Okay, up next in the Lightning Talk Session is Brian Gilligan, Manager, Security and Operations at Brookfield Properties. Brian, great to see you. Thanks for coming on. >> Thanks for having me, John. >> So, unified visibility across the extended asset surface area is key these days. You can't secure what you can't see. So tell me more about how you were able to centralize your view of network assets with Armis, and what impact that had on your business. >> Yeah, that's been a really key component for us. We actually own multiple companies and are acquiring companies from time to time, so it's always a question: what is actually out there, and what do we need to be worried about? From an inventory perspective, it's definitely something that we've been looking into. Armis was a great partner in getting us visibility into a lot of the IoT that we have out in the environment, and then also finding what we have and what's actually installed on those devices: what's running, who's talking to who. So that's definitely been a key component of our partnership with Armis. >> You know, we interview a lot of practitioners and companies, and one thing we've found is that vulnerability management programs have a lot of gaps. Vulnerability management sometimes covers just the IT devices, but not all assets. How has Armis Vulnerability Management made things better for your business, and what can you see now that you couldn't see before? >> Yeah, again, because we own multiple companies, and they actually use different tools for vulnerability management, it's been a challenge to compare apples to apples when we have a vulnerability. When we have risk out there, how do you put a single number to it? How do you prioritize different initiatives across those sectors? Being able to use Armis and have that one score, that one visibility, and also that one platform that you can query across all of those different companies has been huge, because we just hadn't had the ability to say, are we vulnerable to X, Y, and Z across the board in these different companies? >> You know, it's interesting when you have a lot of different assets and companies, as you mentioned. It kind of increases the complexity, and we love the enterprise, but solving complexity with more complexity is not the playbook anymore. We want simplicity. We want a better solution. So when you take into account the criticality of these businesses as you're integrating in real time, and the assets within those business operations, you've got to keep focused on the right solutions. What has Armis done for you that's been right for you guys? >> Yeah, being able to actually drill down into the nitty-gritty on what devices are connecting to what, and being able to enforce policies that way, I think has been a huge win that we've seen from Armis. It's one of those things where we were able to see north-south traffic no problem with our typical SIEM tools, firewall tools, and different logging sources, but we hadn't been able to see anything east-west, and that's where we're going to be most vulnerable. That's where we actually found some gaps in our coverage, from a pen test perspective, where we didn't have that visibility.
Armis has allowed us to get into that communication to better fine-tune the rules that we have across devices and across sectors: from the data center to properties, from properties to the data center, and then also to the cloud. >> Yeah, visibility into the assets is huge. But as you're in operations, you've got to operationalize these tools. I mean, some people sound like they've got a great sales pitch, and then it's, "Wait a minute, I've got to reconfigure my entire operations." At the end of the day, you want an easy-to-use but effective capability, so you're not taxing either personnel or operations. How easy has it been with Armis to implement, from an ease-of-use, simplicity, plug-and-play standpoint? In other words, how quickly did you get to the time to value? Can you share your thoughts? >> This, honestly, is the biggest value that we've seen in Armis. A big kudos goes to the professional services group for getting us stood up, being able to explain the tool, being able to dig into it, and then getting us to that time to value. Honestly, we've only scratched the surface on what Armis can give us, which is great, because they've given us so much already. So we're definitely taking that model of let's crawl, walk, run with what we're able to do. But the professional services team has given us so much assistance in getting from one collector to now many collectors, and we're in that deployment phase where we're able to gather more data and find those anomalies that are out there. Again, big props to the professional services team. >> Yeah, you know, to adapt an old expression: when the whole democratization happened on the web, here came all the people, you know, social media and whatnot. Now, with IoT, here come all the devices. >> Yeah. >> The things. >> More things are being attached to the network. So Armis has this global asset knowledge base that crowdsources the asset intelligence. How has that been a game changer for you? And were you shocked when you discovered how many assets it was able to find, and what impact did that have for you? >> We have a large wifi footprint for guests, vendors, and contractors that are working on site, along with our corporate side, which has a lot of devices on it as well. Being able to see what devices are using what services on there, and then being able to fingerprint them easily, has been huge. I would say one of the best stories that I can tell is actually from a pen test that we ran recently. We were able to determine what the pen test device was and how it was acting anomalously, and then fingerprint that device, within five minutes, as opposed to getting on the phone with probably four or five different groups to figure out, what is this device? It's not one of our normal devices, it's not one of our normal builds or anything. We were able to find that device within probably three to five minutes with Armis and the fingerprinting capability. >> Yeah, nothing's going to get by you with these port scans or any kind of activity, so to speak, jumping on the wifi. Great stuff. Anything else you'd like to share about Armis while I've got you here? >> Yeah, I would say, something recent: we actually have an open position on our team currently, and one of the most exciting things is being able to share the journey that we've had with Armis over the last year, year and a half. Their eyes light up when they hear the capabilities of what Armis can do, what Armis can offer.
And you see a little bit of jealousy of, you know, "Hey, I really wish my current organization had that." It's one of those selling tools that you're able to give to security engineers and security analysts, saying, "Here's what you're going to have on the team to be able to do your job right," so that you don't have to worry about the normal, mundane things. You get to actually go do the cool hunting stuff, which Armis allows you to do. >> Well, Brian, thanks for the time here on this Lightning Talk. Appreciate your insight. I'm John Furrier with theCUBE, the leader in enterprise tech coverage. Up next in the Lightning Talk Session is Alex Schuchman. He's the CISO of Colgate-Palmolive. Thanks for watching.
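Gilligan's pen-test story rests on two capabilities: an east-west traffic baseline and fast fingerprinting of a device that doesn't match it. Here is a minimal editorial sketch of the baseline half, assuming simplified (source, destination) flow tuples and an arbitrary novelty threshold; it illustrates the idea, not Armis' implementation.

```python
# Minimal sketch of east-west anomaly detection: learn which internal peers
# each device normally talks to, then flag a device whose current peers are
# mostly never-before-seen. Thresholds and flow format are assumptions.
from collections import defaultdict

baseline: dict[str, set[str]] = defaultdict(set)  # device -> usual peers

def observe(flows: list[tuple[str, str]]) -> None:
    """Record (src_device, dst_device) flows during a learning window."""
    for src, dst in flows:
        baseline[src].add(dst)

def anomalous(device: str, peers: set[str], threshold: float = 0.5) -> bool:
    """Flag a device if most of its current peers were never seen before."""
    if not peers:
        return False
    known = baseline.get(device, set())
    novel = len(peers - known) / len(peers)
    return novel > threshold

observe([("hvac-07", "plc-02"), ("hvac-07", "ntp-01")])
print(anomalous("hvac-07", {"plc-02", "ntp-01"}))     # False: matches baseline
print(anomalous("pentest-box", {"dc-01", "sql-03"}))  # True: all peers novel
```

The same known-versus-novel comparison is what can turn "what is this device?" from a four-team phone call into a five-minute query, as in the story above.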

Published Date : Jun 21 2022


Nadir Izrael, Armis | Manage Risk with the Armis Platform


 

(upbeat music) >> Today's organizations are overwhelmed by the number of different assets connected to their networks, which now include not only IT devices and assets, but also a lot of unmanaged assets: cloud, IoT, building management systems, industrial control systems, medical devices, and more. And that's not all, there's more. We're seeing a massive volume of threats and a surge of severe vulnerabilities that put these assets at risk. This is happening every day, and many, including me, think it's only going to get worse; the scale of the problem will accelerate. Security and IT teams are struggling to manage all these vulnerabilities at scale. With the time it takes to exploit a new vulnerability, combined with the lack of visibility into the asset attack surface area, companies are having a hard time addressing the vulnerabilities as quickly as they need. This is today's special CUBE program, where we're going to talk about these problems and how they're solved. Hello, everyone. I'm John Furrier, host of theCUBE. This is a special program called Managing Risk Across Your Extended Attack Surface Area with Armis' new asset intelligence platform. To start things off, let's bring in the co-founder and CTO of Armis, Nadir Izrael. Nadir, great to have you on the program. >> Yeah, thanks for having me. >> Great success with Armis. I want to just roll back, zoom out, and look at the big picture. What are you guys focused on? What's the holy grail? What's the secret sauce? >> So Armis' mission, if you will, is to solve, to your point, literally one of the holy grails of security teams for the past decade or so, which is: what if you could actually have a complete, unified, authoritative asset inventory of everything, and stressing that word, everything? IT, OT, IoT, everything in kind of the physical space of things, data centers, virtualization, applications, cloud. What if you could have everything mapped out for you, so that you can actually operate your organization on top of, essentially, a map? I like to equate this to the way organizations and security teams everywhere seem to be running the battlefield, if you will, of their organization without an actual map of what's going on, just charts and graphs. So we are here to provide that map of every aspect of the environment, and to build on top of it business processes, products, and features that assist security teams in managing that battlefield. >> So this category, basically, is a cyber asset attack surface management kind of focus, but it really is defined by this extended asset attack surface area. What is that? Can you explain that? >> Yeah, it's a mouthful. It's CAASM for short, and Gartner do love their acronyms there, but CAASM, in short, is a way to describe a bit of what I mentioned before, or a slice of it. It's the part about a unified view of the attack surface. Where we see things, and kind of where Armis extends that, is really the extended attack surface. That means that idea of, what if you could have it all? What if you could have both a unified view of your environment, and also of every single thing that you have, with a strong emphasis on the completeness of that picture? If I take the map analogy slightly more to the extreme, a map of some of your environment isn't nearly as useful as a map of everything.
If you had to, in your own kind of map application, you know, chart a path from New York to whichever is your favorite surrounding city, but it only took you so far and you had to do the rest of it on your own, that's not nearly as effective. And in security terms, I think it really boils down to: you can't secure what you can't see. So from an Armis perspective, it's about seeing everything in order to protect everything. Not only do we discover every connected asset that you have, we provide a risk rating for every single one of them, we provide a criticality rating, and the ability to take action on top of these things. >> Having a map is huge. Everyone wants to know what's in their inventory, right, from a risk management standpoint, and also from a vulnerability perspective. So I totally see that, and I can see that being the holy grail, but on the vulnerability side, you've got to see everything, and you guys have new stuff around vulnerability management. What's this all about? What kind of gaps are you seeing that you're filling in on the vulnerability side? Because, okay, I can see everything; now I've got to watch out for threat vectors. >> Yeah, and a different way of asking this is: okay, vulnerability management has been around for a while; what the hell are you bringing into the mix that's so new and novel and great? So I would say that vulnerability scanners of different sorts have existed for over a decade, and I think that, ultimately, what Armis brings into the mix today is how we fill in the gaps in a world where critical infrastructure is in danger of being attacked by nation-states, where ransomware is an everyday occurrence, and where credible, up-to-the-minute, and contextualized vulnerability and risk information is essential. Scanners, or how we've been doing things for the last decade, just aren't enough. The three things that Armis excels at, and that complement the security stack today on the vulnerability management side, are scale, reach, and context. Scale, meaning, ultimately, and this is no news to any enterprise, environments are huge. They are beyond huge. When most of the solutions that enterprises use today were built, they were built for thousands, or tens of thousands, of assets. These days we measure enterprises in the billions, billions of different assets, especially if you include how applications are structured, containers, cloud, all that. And ultimately, when the latest and greatest in catastrophic new vulnerabilities comes out, and sadly that's a monthly occurrence these days, you can't just wait around for things to scan through the environment and figure out what's going on there. Real-time images of vulnerabilities, real-time understanding of what the risk is across that entire massive footprint, is essential; and if you don't have that, then lots and lots of teams of people are tasked with doing this day in, day out, in order to accomplish the task. The second thing, I think, is the reach. Scanners can't go everywhere. They don't really deal well with environments that are mixed IT/OT, for instance, like some of our clients deal with. They can't really deal with areas that aren't classic IT. And in general, these days, over 70% of assets are in fact of the unmanaged variety, if you will.
So, combining different approaches, from an Armis standpoint, of both passive and active, we reach a tremendous scale within the environment, and an ability to provide reach that is complete. What if you could have vulnerability management cover a hundred percent of your environment, in a very effective and very scalable manner? And the last thing, really, is context, and that's a big deal here. I think that most vulnerability management programs hinge on asset context, on the ability to understand: what are the assets I'm dealing with, and more importantly, what is the criticality of these assets, so I can better prioritize and manage the entire process along the way? So with these things in mind, what Armis has basically pulled out is a vulnerability management process. What if we could collect all the vulnerability information from your entire environment and give you a map of that, on top of that map of assets? Connect every single vulnerability and finding to the relevant assets, and give you a real way to manage that automatically, in a way that prevents teams of people from having to do a lot of grunt work in the process. >> Yeah, it's like building a search engine, almost. You've got the behavioral, the contextual. You've got to understand what's going on in the environment, and then you've got to have the context of what it means relative to the environment. And this is the criticality piece you mentioned; this is a huge differentiator, in my mind. I want to unpack that. Understanding what's going on, and then what to pay attention to: it's a data problem. You've got that kind of search and cataloging of the assets, and then you've got the contextualization of it, but then, what alarms do I pay attention to? What is the vulnerability? This is the context. This is a huge deal, because your business, your operation, is going to have some important pieces, but it also changes with agility. So how do you guys do that? That's, I think, a key piece. >> Yeah, that's a really good question. So asset criticality is a key piece in being able to prioritize the operation. The reason is really simple, and I'll take an example we're all very, very familiar with; it's been beaten to death, but it's still a good example, which is Log4j, or Log4Shell. When that came out, hundreds of people in large organizations started mapping the entire environment for which applications have what aspect of Log4j. Now, one of the key things there is that when you're doing that exercise for the first time, there are literally millions of systems in a typical enterprise that have Log4j in them, but asset criticality and the application and business context are key here, because some of these different assets that have Log4j are part of your critical business functions and your critical business applications, and they deserve immediate attention; some of them, say some Git server of some developer somewhere, don't warrant quite the same attention or criticality as others. Armis helps by providing the underlying asset map as a built-in aspect of the process. It maps the relationships and dependencies for you. It pulls together and clusters together what applications each asset serves. So I might be looking at a server and saying, okay, this server supports my ERP system. It supports my production applications to be able to serve my customers. It serves, maybe, my .com website.
Understanding what applications each asset serves, and every dependency along the way, meaning that endpoint, that server, but also the load balancers that support it, and the firewalls, and every aspect along the way, that's the bread and butter of the relationship mapping that Armis puts into place. We also allow users to tweak it, add information, and connect us with their CMDB or anywhere else where they keep this, but once the information is in, it can serve vulnerability management, and it can serve other security functions as well. In the context of vulnerability management, it creates a much more streamlined process for being able to do the basics. For some critical applications, I want to know exactly which critical vulnerabilities apply to them. For some business applications, I just want to be able to put SLAs on them, that this must be solved within a week, this must be solved within a month, and be able to automatically track all of these in a world that is very, very complex inside an operation or an enterprise. >> We're going to hear from some of your customers later, but I want to just get your thoughts on, anecdotally, what do you hear from them? You're the CTO and co-founder; you're actually going into the big accounts. When you roll this out, what are they saying to you? What are some of the comments? "Oh my God, this is amazing. Thank you so much." >> Well, of course. Of course. >> Share some of the comments. >> Well, first of all, of course that's what they're saying. They're saying we're great, of course, always. But more specifically, I think this solves a huge gap for them. They are used to tools coming in and discovering vulnerabilities for them, but really close to nothing being able to streamline the truly complex and scalable process of managing vulnerabilities within the environment. Not only that, the integration-led, designer-led deployment, and the fact that we are a completely agentless SaaS platform, are extremely important for them. These are times where, if something isn't easily deployable for an enterprise, its value is next to nothing. I think enterprises have come to realize that if something isn't a one-click deployment across the environment, it's almost not worth the effort these days, because environments are so complex that you can't fully realize the value any other way. So from an Armis standpoint, the fact that we can deploy with a few clicks, the fact that we immediately provide that value, the fact that we're agentless, in the sense that we don't need to go around installing a footprint within the environment, and, for clients who already have Armis, the fact that it's a flip of a switch, just turn it on, is huge. I think the fact, in particular, that the Armis vulnerability management can be deployed on top of the existing vulnerability scanner, with a simple one-click integration, is huge for them. And I think all of these together are what contribute to them saying how great this is. But yeah, that's it. >> The agentless piece is huge. What's the alternative? What does it look like if they go the other route: slow to deploy, have meetings, launch it in the environment? >> I think anything these days that touches an endpoint with an agent goes through a huge round of approvals before it goes into an environment. Same goes, by the way, for additional scanners. No one wants to hear about additional scanners.
They've already gone through the effort, with some of the biggest tools out there, of punching holes through firewalls and installing scanners in different ways. They don't want yet another scanner, or yet another agent. Armis rides on top of the existing infrastructure, the existing agents, the existing scanners. You don't need to do a thing. It just deploys on top of it, and that's really what makes this so easy and seamless. >> Talk about Armis research. What's that about? What's going on there? What are you guys doing? How do you guys stay relevant for your customers? >> For sure. So I've made a lot of bold claims throughout, I think, the entire Q and A here, but one of the biggest magic components to Armis, if you will, one that kind of helps explain what all these magic components are, is really something that we call our collective asset knowledge base. It's really the source of our power. Think of it as a giant collective intelligence that keeps learning from all of the different environments, combined, that Armis is deployed at. Essentially, if we see something in one environment, we can translate it immediately into all environments. So anyone who uses the product joins this collective intelligence, in essence. What does that mean? It means that Armis learns about vulnerabilities from other environments. A new Log4j comes out, for instance. It's enough that, in some environments, Armis is able to see it from scanners, or from agents, or from SBOMs, or anything that basically provides information about Log4j, and Armis immediately infers or creates enrichment rules that act across the entire tenant base, or the entire client base, of Armis. So, a very quick response to industry events; whenever something comes out, the results are immediate, very up to the minute, very up to the hour. But also, I'd say that Armis does its own proactive asset research. We have a huge data set at our disposal, a lot of willing and able clients, and also a lot of partners within the industry that Armis leverages, but our own research is into interesting aspects within the environment. We do our own proactive research into things like TLStorm, which is kind of a bridging of research and vulnerabilities between the cyber and physical aspects: on the one hand, the cyber space and kind of virtual environments, and on the other hand, the actual physical space and vulnerabilities in things like UPSs, or industrial equipment, or things like that. But I will say, also, that Armis targets its research along different paths that we feel are underserved. We started research a few years back into firmwares and different types of real-time operating systems. We came out with things like URGENT/11, which was research into, on the one hand, an operating system that runs on two billion different devices worldwide, while on the other hand, in the 40 years it has existed, only 13 vulnerabilities were ever exposed or revealed in that operating system. Either it's the most secure operating system in the world, or it just hasn't gone through enough rigor and enough research. The type of active research we do is meant to complement a lot of the research going on in the industry, serve our clients better, but also provide inroads, I think, for the industry to be better at what it does. >> Awesome, Nadir, thanks for sharing the insights. Great to see the research. You've got to be at the cutting edge.
You've got to investigate and be ready at a moment's notice on all aspects of the operating environment, down to the hardware, down to the packet level, down to any vulnerability. Be ready for it. Great job. Thanks for sharing. Appreciate it. >> Absolutely. >> In a moment, Tim Everson's going to join us. He's the CISO of Kalahari Resorts and Conventions. He'll be joining me next. You're watching theCUBE, the leader in high tech coverage. I'm John Furrier. Thanks for watching. (upbeat music)
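To make Izrael's criticality argument concrete, here is a minimal editorial sketch of the triage flow he describes: join each finding to the assets it affects, rank by business criticality, and derive a remediation SLA from that context. The asset records, tiers, and SLA windows below are hypothetical illustrations, not Armis' actual data model.

```python
# Editorial sketch of criticality-driven triage: attach findings to assets,
# rank by business criticality, and auto-assign an SLA deadline per tier.
# Assets, tiers, and SLA windows are hypothetical.
from datetime import date, timedelta

assets = {
    "srv-erp-01": {"criticality": "critical", "apps": ["ERP"]},
    "git-dev-09": {"criticality": "low", "apps": ["dev tooling"]},
}
findings = [  # (CVE, affected asset)
    ("CVE-2021-44228", "srv-erp-01"),  # Log4Shell on a critical ERP server
    ("CVE-2021-44228", "git-dev-09"),  # same CVE, far less urgent context
]
SLA_DAYS = {"critical": 7, "high": 30, "low": 90}
RANK = {"critical": 0, "high": 1, "low": 2}

def triage() -> None:
    today = date.today()
    rows = []
    for cve, asset_id in findings:
        asset = assets[asset_id]
        due = today + timedelta(days=SLA_DAYS[asset["criticality"]])
        rows.append((RANK[asset["criticality"]], cve, asset_id, asset["apps"], due))
    for _, cve, asset_id, apps, due in sorted(rows):  # most critical first
        print(f"{cve} on {asset_id} (serves {apps}): fix by {due}")

triage()
```

Under a policy like this, the same Log4Shell finding gets a seven-day clock on the ERP server and a ninety-day clock on a developer's Git box, which is exactly the prioritization story from the Log4j example above.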

Published Date : Jun 21 2022



Alex Schuchman, Armis | Managing Risk with the Armis Platform


 

>> Hello, and welcome back to Managing Risk Across Your Extended Attack Surface Area with the Armis asset intelligence platform. I'm John Furrier, your host. We're here for the CISO perspective with Alex Schuchman, who is the CISO of Colgate-Palmolive Company. Alex, thanks for coming on. >> Thanks for having me. >> You know, unified visibility across the enterprise surface area is about knowing what you've got to protect. You can't protect what you can't see. Tell me more about how you guys are able to centralize your view of network assets with Armis. >> Yeah, I think the most important part of any security program is really visibility, and that's one of kind of the building blocks when you're building a security program. You need to understand what's in your environment, what you control, what is being introduced new into the environment. And any solution that gives you full visibility into your infrastructure, your environment, and all the assets that are there, that's really one of the bread and butter pieces of your security program. >> What's been the impact on your business? >> You know, I think from an IT point of view, running the security program, our key thing is really enabling the business to do their job better. So if we can give them visibility into all the assets that are available in their individual environments, and we're doing that in an automated fashion with no manual collection, that's yet another thing that they don't have to worry about. And then we're delivering, because really IT is an enabler for the business, and then they can focus on what their job is, which is to deliver product. >> Yeah, and a lot of changes in their network. You've got infrastructure, you've got OT devices, IoT devices. So vulnerability management becomes more important. It's been around for a while, but it's not just IT devices anymore. There are gaps in vulnerability coverage across the OT network. What can you tell us about Colgate's use of Armis for vulnerability management? What can you see now that you couldn't see before? Can you share your thoughts on this? >> Yeah, I think what's really interesting about kind of manufacturing environments today is, if you look back a number of years, most of the manufacturing equipment was really disconnected from the internet. It was really running in silos, so it was very easy to protect equipment that isn't internet connected. You could put in a firewall, you could segment it off, and it was really on an island on its own. Nowadays you have a lot of IoT devices, you have a lot of internet-connected devices, sensors providing information to multiple different suppliers or vendor solutions, and you have to really then open up your ecosystem more, which of course means you have to change your security posture, and you really have to embrace: if there's a vulnerability with one of those suppliers, then how do you mitigate the risk associated with that vulnerability? Armis really helps us get a lot of information so that we can then make a decision with our business teams. >> That whole operational aspect of criticality is huge, knowing what's key on the assets. How has that changed the security workload for you guys? >> Yeah, for us, I mean, it's all about being efficient. If we can have the visibility across our manufacturing environments, then my team can easily consume that information.
You know, if we spend a lot of time trying to digest the information, trying to process it, trying to prioritize it, that really hurts our efficiency as a team and as a function. What we really like is being able to use technology to help us do that work. We're not an IT shop, we're a manufacturing shop, but we're a very technical shop, so we like to drive everything through automation and not be a bottleneck for any of the actions that take place. >> You know, the old expression is, is the juice worth the squeeze? It comes up a lot when people are buying tools around vulnerability management, endpoint, all this stuff. So a SaaS solution is key, with no agents to deploy. You have that. Talk about how you operationalized Armis in your environment. How quickly did it achieve time to value? Take us through that consumption of the product. What was the experience like? >> Yeah, I'll definitely say, in the security ecosystem, that's one of the biggest promises you hear across the industry. And when we started with Armis, we started with a very small deployment, and we wanted to make sure it was really worth the lift, to your point. We implemented the first set of plants very quickly, actually even quicker than we had put in our project plan, which is not typical for implementing complex security solutions. And then we were so successful with that, we expanded to cover more of our manufacturing plants, and we were able to get really true visibility across our entire manufacturing organization in the first year, with the ability to also say that we extended that information, that visibility, to our manufacturing organization, and they could also consume it just as easily as we could. >> That's awesome. How many assets did you guys discover? Just curious on the numbers. >> Oh, that's the really interesting part. You know, before we started this project, we would have had to do a manual audit of our plants, which is typical in our industry. When we started this project and we put in estimates, we really didn't have a great handle on what we were going to find. And what's really nice about the Armis solution is it's truly giving you full visibility. So you're actually seeing, besides the servers and the PLCs and all the equipment that you're familiar with, you're also connecting it to your wireless access points, you're connecting it to see any of those IoT devices as well. And then you're really getting full visibility through all the integrations that they offer. You're amazed how many devices you're actually seeing across your entire ecosystem. >> It's like Google Maps for your infrastructure. You get a little street view. You want to look at it, you get, you know, the fake tree in there, whatever, but it gives you the picture. That's key. >> Correct. And with a nice visualization and an easy search engine, similar to your Google analogy, everything is really at your fingertips. If you want to find something, you just go to the search bar, click a couple entries, and boom, you get your list of the associated devices or the associated locations and devices. >> Well, I appreciate your time. I know you're super busy as a CISO, with a lot on your plate. Thanks for coming on and sharing. Appreciate it. >> No problem, John. Thanks for having me. >> Okay. In a moment, Brian Inman, a sales engineer at Armis, will be joining me.
You're watching theCUBE, the leader in high tech coverage. Thanks for watching.

Published Date : Jun 17 2022


Breaking Analysis: How Lake Houses aim to be the Modern Data Analytics Platform


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

Earnings season has shown a conflicting mix of signals for software companies. Virtually all firms are expressing caution over so-called macro headwinds; we're talking about Ukraine, inflation, interest rates, Europe FX headwinds, supply chain, just overall IT spend. MongoDB, along with a few other names, appeared more sanguine, thanks to a beat in the recent quarter and a cautious but upbeat outlook for the near term. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, ahead of MongoDB World 2022, we drill into Mongo's business and what ETR survey data tells us in the context of overall demand and the patterns that we're seeing from other software companies.

We're seeing some distinctly different results from major firms these days. We'll talk more about MongoDB in this session, which beat EPS by 30 cents and revenue by more than 18 million dollars. Salesforce had a great quarter, and its diversified portfolio is paying off, as seen by the stock's noticeable uptick post earnings. UiPath, which had been really beaten down prior to this quarter, has brought in a new co-CEO, and its business is showing a nice rebound, with a small three-cent EPS beat and a nearly 20 million dollar top-line beat. CrowdStrike is showing strength as well. Meanwhile, managements at Microsoft, Workday and Snowflake expressed greater caution about the macroeconomic climate, and especially on investors' minds is concern about consumption pricing models. Snowflake in particular, which had a small top-line beat, cited softness and effects from reduced consumption, especially from certain consumer-facing customers, which has analysts digging more deeply into the predictability of their models.

In fact, Barclays analyst Raimo Lenschow published an especially thoughtful piece on this topic, concluding that MongoDB was less susceptible to consumption headwinds than, for example, Snowflake, essentially for a few reasons: one, because Atlas, Mongo's cloud managed service, which is the consumption model, comprises only about 60 percent of Mongo's revenue; second is the premise that MongoDB is supporting core operational applications that can't be easily dialed down or turned off; and three, that Snowflake, it sounds like, has a more concentrated customer base, and due to that fact a preponderance of its revenue is consumption driven and would be more sensitive to swings in these consumption patterns.

Now, I'll say this. First, consumption pricing models are here to stay, and the much preferred model for customers is consumption. The appeal of consumption is, I can actually dial down, turn off if I need to, and stop spending for a while, which happened, or at least happened to a certain extent, this quarter for certain companies. But to the point about MongoDB supporting core applications, I do believe that over time you're going to see the increased emergence of data products that will become core monetization drivers, and Snowflake, along with other data platforms, is going to feed those data products and services and become, over time, maybe less susceptible and less sensitive to these consumption patterns. It'll always be there, but I think increasingly it's going to be tied to operational revenue.

Last two points here in this slide: software valuations have reverted to their historical mean, which is a good thing in our view. We've taken some air out of the bubble and returned to more
normalized valuations, which was really predicted and looked forward to. Look, we're still in a lousy market for stocks; it's really a bear market for tech. The market tends to be at least six months ahead of the economy and often, not always, but often, is a good predictor. We've had some tough compares relative to the pandemic days in tech, and we'll be watching next quarter very closely, because the macro headwinds have now been firmly inserted into the guidance of software companies.

Okay, let's have a look at how certain names have performed relative to a software index benchmark so far this year. Here's a year-to-date chart comparing Microsoft, Salesforce, MongoDB and Snowflake to the IGV software-heavy ETF, which is shown in the darker blue line, and which, by the way, does not own Snowflake or MongoDB. You can see that these big super caps have fared pretty well, whereas MongoDB and especially Snowflake, those higher growth companies, have been much more negatively impacted year to date from a stock price standpoint.

Now let's move on and take a financial snapshot of MongoDB and put it next to Snowflake, so we can compare these two higher growth names. What we've done here in this chart is taken the most recent quarter's revenue and multiplied it by 4x to get a revenue run rate, and we've parenthetically added a projection for the full year revenue. MongoDB, as you see, will do north of a billion dollars in revenue, while Snowflake will begin to approach three billion dollars, 2.7, and run right through that four-quarter run rate that they just had last quarter. And you can see Snowflake is growing faster than MongoDB, at 85 percent this past quarter.

We took most of these next profitability ratios off the current quarter, with one exception. Both companies have high gross margins; of course you'd expect that, but as we've discussed, not as high as some traditional software companies, in part because of their cloud costs, but also their maturity, or lack thereof. Both MongoDB and Snowflake, because they are in high growth mode, have thin operating margins. They spend nearly half or more than half of their revenue on growth; that's the SG&A line, and mostly the S, the sales and marketing, is really where they're spending money. And they're specialists, so they spend a fair amount of their revenue on R&D, maybe not as high a percentage as you might think, but pretty hefty.

The free cash flow as a percentage of revenue line we calculated off the full year projections, because there was kind of an anomaly this quarter in the Snowflake numbers, and you can see Snowflake's free cash flow, which again was abnormally high this quarter, is going to settle in around 16 percent this year, versus Mongo's six percent. So a strong focus by Snowflake on free cash flow and its management. Snowflake has about four billion dollars in cash and marketable securities on its balance sheet with little or no debt, whereas MongoDB has about two billion dollars on its balance sheet with a little bit of longer term debt. And you can see Snowflake's market cap is about double that of Mongo's, so you're paying for higher growth with Snowflake. You're paying for the Slootman-Scarpelli execution engine, the expectation there, a stronger balance sheet, et cetera. But Snowflake is well off its roughly 100 billion valuation, which it touched during the peak days of tech during the pandemic. And just as an aside, MongoDB has around 33,000 customers, about five times the number of customers Snowflake has. So a bit of a different customer mix and concentration, but both companies, in our view, have no lack of market in terms of TAM.
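As a quick aside, the run-rate arithmetic in that snapshot is simple enough to verify directly. The figures below are illustrative placeholders chosen to land in the same ballparks quoted above, not the companies' reported numbers.

```python
# Sketch of the run-rate arithmetic used in the snapshot above:
# annualized run rate = most recent quarter's revenue x 4.
def run_rate(quarterly_revenue_m: float) -> float:
    """Annualize a quarterly revenue figure (in $ millions)."""
    return quarterly_revenue_m * 4

def fcf_margin(free_cash_flow_m: float, revenue_m: float) -> float:
    """Free cash flow as a percentage of revenue."""
    return free_cash_flow_m / revenue_m * 100

print(run_rate(285.0))                    # 1140.0 -> "north of a billion"
print(round(fcf_margin(96.0, 600.0), 1))  # 16.0 -> FCF as a % of revenue
```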
Okay, now let's dig a little deeper into Mongo's business and bring in some ETR data. This colorful chart shows the breakdown of Mongo's net score. Net score is ETR's proprietary methodology that measures the percent of customers in the ETR survey that are adding the platform new, that's the lime green, at nine percent; existing customers that are spending six percent or more on the platform, that's the forest green, at 37; spending flat, that's the gray, at 46 percent; decreasing spend, that's the pinkish, at around five; and churning, that's only three, the bright red, for MongoDB. Subtract the reds from the greens and you net out to a 38, which is a very solid net score figure. Note this is a survey of 1,500 or so organizations, and it includes 150 MongoDB customers, which includes, by the way, 68 Global 2000 customers, and they show a spending velocity, or net score, of 44, so notably higher among the larger clients. And while it's a smaller sample, only 27, EMEA's net score for MongoDB is 33; that's down from 60 last quarter. Note that MongoDB cited softness in its European business on its earnings call, so that aligns to the ETR data.

Okay, now let's plot MongoDB relative to some other data platforms. These don't all necessarily compete head to head with MongoDB, but they are data and database platforms in the ETR data set, and that's what this chart shows. It's an XY graph with net score, or as we say, spending momentum, on the vertical axis, and overlap, or presence, pervasiveness in the data set, on the horizontal axis. See that red dotted line there at 40? That indicates an elevated level of spending; anything above that is highly elevated. We've highlighted MongoDB in that red box, which is very close to that 40 percent line, and it has a pretty strong presence on the x-axis, right there with GCP. Snowflake, as we've reported, has come down to earth, but is still well elevated; again, that aligns with the earnings releases. AWS and Microsoft have many data platforms, especially AWS, so their plot position reflects their broad portfolio: massive size on the x-axis, that's the presence, and very impressive on the vertical axis, so despite that size they have strong spending momentum. And you can see the pack of others, including Cockroach, small on the horizontal but elevated on the vertical; Couchbase, creeping up since its IPO; Redis; MariaDB, which was launched the day that Oracle bought Sun and got MySQL; and some legacy platforms, including the leader in database, Oracle, as well as IBM and Teradata's both cloud and on-prem platforms.

Now, one interesting side note here: on Mongo's earnings call it clearly cited the advantages of its increasingly all-in-one approach relative to others that offer a portfolio of bespoke, or what we sometimes call horses-for-courses, databases. MongoDB cited the advantages of its simplicity and lower costs as it adds more and more functionality. This is an argument often made by Oracle, and they often target AWS as the company with too many databases, and of course MongoDB makes that argument as well. But they also make the argument that traditional relational databases, they don't necessarily call Oracle out, but they talk about traditional relational databases, and of course they're talking about Oracle and others, are more complex, less flexible and less appealing to developers than is MongoDB. Now, Oracle of course would retort, saying, hey, we now support a MongoDB API, so why go anywhere else? We're the most robust and the best for mission critical. But this gives credence to the fact that if Oracle is trying to capture business by offering a MongoDB API, for example, then MongoDB must be doing something right.
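For readers who want the net score methodology described at the top of this segment in executable form, here is a small sketch; the percentages are the MongoDB breakdown quoted above.

```python
# ETR net score as described above: customers adding the platform or
# increasing spend, minus those decreasing or churning. Flat spenders
# do not count either way.
def net_score(new: float, increasing: float, flat: float,
              decreasing: float, churning: float) -> float:
    # The five buckets should cover the whole customer sample.
    assert abs(new + increasing + flat + decreasing + churning - 100) < 1e-6
    return (new + increasing) - (decreasing + churning)

# MongoDB's breakdown from the survey: 9% new, 37% spending 6% or more,
# 46% flat, 5% decreasing, 3% churning.
print(net_score(9, 37, 46, 5, 3))  # 38
```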
Okay, let's look at why they buy MongoDB. Here's an ETR chart that addresses that question. Mongo's feature breadth is the number one reason; lower cost, or better ROI, is number two; integrations and stack alignment is third; and Mongo's technology lead is fourth. Those four kind of stand out. Notice, on the right-hand side, security and vision are much lower. That doesn't necessarily mean that MongoDB doesn't have good security and good vision, although security concerns have been cited, and so we keep an eye on that.

But look, MongoDB has a document database. It's become a viable alternative to traditional relational databases, meaning you have much more flexibility over your schema; in fact, it's kind of schema-less. You can pretty much put anything into a document database, and developers seem to love it. Generally, it's fair to say Mongo's architecture would favor consistency over availability, because it uses a single-master architecture as a primary, and you can create secondary nodes in the event of a primary failure. But you've got to think about that and how to architect availability into the platform, and you've got to consider recovery more carefully. Now, no schema means it's not a tables-and-rows structure, and you can, again, shove anything you want into the database, but you've got to think about how to optimize performance on queries.

Now, MongoDB has been hard at work evolving the platform from the early days. When you go back and look at its roadmap, it started purely as a document database. It added graph processing and time series; it's made search much easier and more fundamental; it's added Atlas, that fully managed cloud database service, which we said now comprises 60 percent of its revenue; Kubernetes integrations and kind of the modern microservices stack; and dozens and dozens of other features. Mongo's done a really fine job, we think, of creating a leading database platform today that is loved by customers, loved by developers, and is highly functional.

And next week theCUBE will be at MongoDB World, and we'll be looking for some of these items that we're showing here. There's always going to be a main focus on developers; MongoDB prides itself on being a developer-friendly platform. We're going to look for new features, especially around security and governance, and simplification of configurations and cluster management. MongoDB is likely going to continue to advance its all-in-one appeal and add more capabilities that reduce the need to spin up bespoke platforms. And we would expect enhancements to Atlas; Atlas really is the future: maybe adding more cloud-native features and integrations, perhaps simplified ways to migrate to the cloud, to Atlas, and improved access to data sources, generally making the lives of developers and data analysts easier. That's going to be, we think, a big theme at the event. So these are the main things that we'll be scoping out at the event, so please stop by if you're in New York City at MongoDB World, or tune in to thecube.net.
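As a footnote to the document-database discussion above, here is a minimal illustration of the schema flexibility the episode highlights. It assumes a MongoDB server running on localhost and the PyMongo driver installed; the database, collection and field names are made up for the example.

```python
# Two documents with different shapes land in the same collection,
# with no table definition or schema migration required.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# No fixed tables-and-rows structure: each document carries its own shape.
orders.insert_one({"sku": "A-100", "qty": 2, "price": 9.99})
orders.insert_one({"sku": "B-200", "qty": 1, "gift_wrap": True,
                   "notes": ["expedite", "fragile"]})

# Queries still work across heterogeneous documents.
for doc in orders.find({"qty": {"$gte": 1}}):
    print(doc)
```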
Okay, that's it for today. Thanks to my colleagues: Stephanie Chan, who helps research Breaking Analysis from time to time; Alex Myerson, who is on production today, as is Andrew Frick, Sarah Kenney, Steve Conte, Anderson Hill and the entire team in Palo Alto, thank you. Kristen Martin and Cheryl Knight helped get the word out, and Rob Hof is our editor-in-chief over at SiliconANGLE. Remember, all these episodes are available as podcasts; wherever you listen, just search Breaking Analysis Podcast. We do publish each week on wikibon.com and siliconangle.com. Want to reach me? Email me at david.vellante@siliconangle.com, or DM me @dvellante, or comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and see you next time. (music)

Published Date : Jun 3 2022


Amit Zavery, VP GM and Head of Platform, Google Cloud


 

>> Welcome back to Cube On Cloud. My name is Paul Gillin, enterprise editor at SiliconANGLE, and I'm pleased to now have as a guest on the show Amit Zavery, general manager, vice president of business application platform at Google Cloud. Amit is formerly EVP and corporate officer for product development at Oracle Cloud, 24 years at Oracle, and by my count a veteran of seven previous appearances on theCube. Amit, welcome, thanks for joining us. >> Thanks for having me, Paul, it's always good to be back on theCube. >> Now, one of your big focus areas right now is on low-code and no-code. Of course this is a market that seems to be growing explosively. We often hear low-code and no-code used in the same breath, as if they're the same thing. In fact, how are they different? >> I think it's a huge difference. The industry started with low-code many, many years ago. I mean, there were technologies, or tools, provided for kind of helping developers be more productive; that's what low-code was doing. It was not really meant, even though it was positioned, for citizen developers; it was very hard for a non-technologist to really build an application using low-code. No-code is really meant as the word stands: no code. So there's really no coding, there's no understanding required about the underlying technology stack, or knowing how constructs work or how the data is laid out. All that stuff is kind of hidden and abstracted out from you. You are really focused, as a citizen developer or a line of business user, on delivering what your business application requirements are, and the business flows are, without having to know anything about writing any code. So you can build applications, you can build your interfaces, and not have to learn anything about a single line of code. And I think we're getting to a phase now where the platforms have gotten much stronger and better, where you can build very good, productive applications without having to write a single line of code. So that's really the goal with no-code, and that's really the future in terms of how we will get more and more line of business users, or citizen developers, to build the applications they need for their day-to-day work. >> So when would you use one or the other? >> I think low-code, as you probably know, has been around for eight, 10 years, if not longer, where you abstract out some of this stuff. You can do some of the things in terms of not having to write some code, where you have a lot of modules pre-built for you, and then when you want to make a lot of changes, you go and drop into an IDE and write some code or make some changes to the code. So you still get into that, and those are really focused towards semi-professional developers, or IT in many cases, or even developers who want to reduce the time required to start writing and building an application; it makes you much more productive. So if you are a semi-professional, or you are a developer, you can use low-code to improve your productivity and not start from scratch. No-code is really used for folks who are really not interested in learning about coding, don't have any experience in it, and still want to be productive and build applications. And that's really where I would start. I would not give low-code to a citizen developer or a line of business user who has no experience with any coding. That's not really productive.
They'll only get frustrated and not deliver what you need, and not get anything out of it in many cases. >> Well, I've been around this industry long enough to remember fourth-generation languages and Visual Basic... >> Yeah. >> ...and the predecessors that never really caught on in a big way. I mean, they certainly had big audiences, but right now we're seeing 40, 50% annual market growth. Why is this market suddenly so hot? >> Yeah, there is a difference. I think, as you said, with the 4GLs, and a lot of those tools, even if you look at Forms and PL/SQL, we kind of abstracted out the technology and made it easier, but it was not very clear who they were targeting with that. They were still targeting the same developer audience, so they never expanded the universe of users. It was the same user base, just made simpler for them. So with those low-code tools, they never ended up growing the user base out of that. With no-code platforms, you are now expanding the user community. You are giving these capabilities to more and more users than a low-code tool could provide. That's why I think the growth is much faster. So if you find the right no-code platform, you will see a lot more adoption, because you're solving a real problem, you are giving them a lot more capabilities, and making the user productive without having to depend on IT in many cases, or having to wait for a lot of those big applications to be built for them, even though they need them immediately. So I think that's why: you're solving a real business problem and giving a lot more capabilities to users, and no doubt the users love it and they start expanding the usage. It's very viral adoption in many cases after that. >> Historically the rap on these tools has been that, because they're typically interpreted, the performance is never going to be up to that of an application written in C++ or something. Is that still the case? Is that a sort of structural weakness of no-code tools, or is that changing? >> In the early days, probably; not anymore. I think if you look at what we are doing at Google Cloud, for example, it's not interpreted. I mean, it does do a lot of heavy lifting underneath the covers, and you don't have to go into the coding part of it, but it brings the whole cloud platform with it, right? So the scalability, the security, the performance, availability, all that stuff is built into the platform. So it's not a tool, it's a platform. I think that's the big difference. In the early days you would see a lot of these things as a tool, which you can use, but the runtimes are very weak; there's really not a full cloud platform provided with it. But the way we're seeing it now, and over the last many years what we have done and what we continue to do, is to bring the power of the cloud platform with it. So you're not missing out on the scalability, the performance, the security; even the compliance and governance is built in. So IT is part of the process, even though they might not build an application themselves. And that's where I think the barriers have been lifted. And again, it's not a solution for everything, also. I'm not saying this fits everywhere. If you want to build a full end-to-end e-commerce site, for example, I would not use a no-code platform for it, because you're going to do a lot more heavy lifting, you might want to integrate with a lot of custom stuff, you might build a custom experience.
All that kind of stuff might not be that doable, but there are a lot of use cases now which you can deliver with a platform like what we've been building at Google Cloud. >> So, talk about what you're doing at Google Cloud. Do you have a play in both the low-code and the no-code market? Do you favor one over the other? >> Yeah, no, I think we've deployed technologies and services across the gamut of different requirements, right? I mean, our goal is not that we will only address one market's needs and ignore the rest of the things required by our developer community. As you know, Google Cloud has been very focused for many years on delivering capabilities for the developer community, with technologies like Kubernetes and containers, TensorFlow for AI, compute, storage; all that kind of stuff is really developer centric. We have a lot of developers building applications on it, writing code. We have abstracted some of this stuff and provide a lot of low-code technologies, like Firebase for building mobile apps; there are millions of mobile apps built by developers using Firebase today, and it does abstract out the technology, so you don't have to do a lot of heavy lifting yourself. So we do provide a lot of low-code tooling as well. And now, as we see the need for no-code, especially kind of empowering the line of business user and citizen developers, we acquired a company called AppSheet in early 2020 and integrated that as part of our Google Cloud Platform, as well as the Workspace, the G Suite, the Gmail, all the technology, all the services we provide for productivity and collaboration, and allowed users to now extend those collaboration capabilities by adding a workflow, and adding an app experience, as needed for a particular business user's needs. So that's how we're looking at it: making sure that we can deliver a platform for a spectrum of different use cases, and get that flexibility for the end user; in terms of whatever they need to do, we should be able to provide it as part of the Google Cloud Platform now. >> So as far as Google Cloud's positioning, I mean, you're number three in the market. You're growing, but not really changing the distance between you and Microsoft or, from what public information we've been able to see, AWS. In Microsoft you have a company that has a long history with developers and with development tools, really as a core strength. Do you see your low-code/no-code strategy as being a way to make up ground on them? >> Yeah, I think that's the way to look at the market; and again, I know the industry analysts and the market love to do rankings in this world, but I think the cloud business is probably big enough for a lot of vendors. I mean, this is growing at an amazing pace, as you know, and it is a large investment. It takes time for a lot of the vendors to deliver everything they need to. But today, if you look at a lot of the net new growth and a lot of net new customers, we're seeing a huge percentage of share coming to Google Cloud, right? And we continue to announce some of the public things, and the results will come out again every quarter, and we try to break out the cloud segment in the Google results more regularly, so that people get an idea of how well we're doing in the cloud business. So we are very comfortable where we are in terms of our growth, in terms of our adoption, as well as in terms of how we're delivering all the value our customers require, right?
So, no doubt, one of the things we want to do is make sure that we have an end-to-end offering for all of the different use cases customers require, and no-code is one of the parts we want to deliver for our customers as well. We've got very good capabilities in data analytics; we do a lot of work around AI/ML and industry solutions. You look at the adoption we've had around a lot of those platforms, and hybrid and multicloud; it's been growing very, very fast. And this is one more additional thing we are going to do, so that we can deliver what our customers are asking for. We're not too worried about the rankings; we are really worried about making sure we're delivering the value to our customers, and we're seeing that play out very well. And if you look at the numbers now, I mean, the growth rate is higher than any other cloud vendor's, as well as us seeing a huge amount of demand on Google Cloud as well. >> Well, not to belabor the point, but naturally your growth rate is going to be higher if you're a third the size. I mean, how important is it to you to break into, to surpass, the number two spot? How important are rankings within the Google Cloud team, or are you focused mainly more on growth and just consistency? >> No, again, we are not focused on rankings or any of that stuff, typically. I think we're worried about making sure customers are satisfied, and about adding more and more customers. So if you look at the volume of customers we're signing up, a lot of the large deals; you need to look at the announcements we've made over the last year; there has been tremendous momentum around that. A lot of large banks, a lot of large telecommunication companies, large enterprises, name them; I think all of them are starting to kind of pick up Google Cloud. So if you follow that, I think that's really what is satisfying for us, and the results are starting to show that growth and the momentum. We can't cover the gap we had in the previous years, because Google Cloud started late in this market, and the cloud business grows by accumulating revenue over many years. So I don't look at the history; I'm looking at the future, really. And if you look at the growth for the new business, and the percentage of the net new business, we're doing better than pretty much any other vendor out there. >> And you said you were stepping up your cadence to disclose those numbers. Was that what I heard you say? >> I think every quarter you're seeing that. We started announcing our revenue and growth numbers, and we've started to do a lot of reporting about our cloud business, and you will see more and more of that regularly from Google now. >> Let's get back just briefly to the low-code/no-code discussion. A lot of companies are looking at how to roll this out right now. You've got some big governance issues involved here. If you have a lot of citizen developers, you also have the potential for chaos. What advice are you giving customers using your tools for how they should organize around citizen development? >> Yeah, no doubt. If this is going to be adopted by enterprises, you can't make it completely rogue or completely shadow-based development. So part of our no-code platform, one thing we want to make sure is that it is enterprise ready; it has many aspects required for that. One is compliance: making sure you have all the regulatory things delivered for data privacy and security. Second is governance.
A lot of the IT departments want to make sure of who's using this platform, how they are accessing it, whether they are getting the right security privileges associated with that, and whether we are giving them the right permissions. So in our no-code platform we're adding all this compliance, governance and regulatory stuff as part of our underlying platform. Even though the end user might not have to worry about it, and the person who's building applications shouldn't have to think about it, we do want to give controls to IT as needed by the large enterprises. So that is a big part of how we deliver this. We're not thinking about this as: go and build it, then rewrite it once you have to do things for your enterprise, and then go and do it again and again, because then it's just a waste of time and you're not getting the benefit of the platform at all. So we're bringing those things together, where we have a very easy to use, very powerful no-code platform with the enterprise compliance, as well as governance, built into that platform as well. And that is really resonating. If you look at a lot of the customers we're working with, they do require that, and they get excited about it, as well as about democratizing development for all of their line of business users. They're very happy that they're getting that kind of a platform, which they can scale from and deliver the productivity required. >> Certainly going to make businesses look very different in the future. And speaking of futures, it is January, it's time to do predictions. What are your predictions (laughs) for the cloud for this year? >> No doubt cloud has become the center for pretty much every company now. I think digital transformation, especially with COVID, has greatly accelerated. We have seen many customers who were thinking of digitizing pieces of their platform, pieces of their workflow or business; now they're trying to do it for all of it. So the one part which we see for this year is the need for more and more efficiency in industry-verticalized business workflows. It's not just about providing a plain vanilla cloud platform, but also providing a lot more content, business details and business workflows by industry segment. So we've been doing a lot of work there, and we expect a huge amount of that to become a more and more core part of our offering, as well as what customers are asking for, where you might need things around, say, a know-your-customer kind of workflow for financial services, or telehealth for healthcare. I mean, every industry has specific things, like demand management and demand forecasting for retail; but we're making that part of a cloud service, not just saying, hey, I have compute, storage, network, I have some kind of a platform, go and build what you want for your industry needs. We want to provide all those kinds of business processes and content for those industries as well. So we identified six, seven industries. We see that as kind of the driving factor for our cloud growth, as well as helping our customers be much more productive, and seeing the value of cloud become much more real for them, versus just a replacement for the data center. I think that's really the big shift in '21, and I think that will make a big difference for all the companies who are really trying to digitize and be at the forefront of the needs their customers require in the future.
>> Of course, all of this accelerated by the pandemic and all of the specialized needs that have emerged from that. >> And I think the point which is important as well: as you know, everybody talks about AI/ML as like a big thing. No doubt AI/ML is an important element of it, but if you make it usable and powerful through these kinds of workflows and business processes, as well as particular business applications, I think you see a lot more interest in using it than just a plain vanilla framework, or just technology for technology's sake. So we try to bring the power of AI and ML into these business and industry applications, where we have a lot of good technologists at Google who know how to use all these things. We want to bring that into those applications and platforms. >> Exciting times ahead. Amit Zavery, thank you so much for joining us. You look just as comfortable as I would expect someone to be who is doing his eighth Cube interview. Thanks for joining us. >> (laughing) Thanks for having me, Paul.

Published Date : Jan 8 2021


Platform Session | HPE GreenLake Day


 

>> Hi, and thanks for joining us today. I'm Arwa Kaddoura, vice president of go-to-market for HPE GreenLake. In this session, we're going to explore a few of the ways we're bringing the cloud to your data center and colocations, especially for your most demanding workloads. We'll show a few examples of how we do this and how we can help you with HPE GreenLake.

With HPE GreenLake we're leading the market for on-premises and hybrid cloud. With a decade of experience and over 1,000 customers, we've been able to continue enriching our portfolio of services, leveraging the vast input from our customers. And what we're hearing now is they want us to take on the apps and data that are most critical to run their business. Our customers love the cloud experience and want it available everywhere, including their data center and colo. HPE GreenLake is the cloud that comes to you. We deliver a cloud experience for your infrastructure and workloads in your data center or colocation and at the edge. HPE GreenLake cloud services offer consumption-based economics and scalability for a wide range of platforms, all managed for you by HPE or by a rich ecosystem of partners. In June, we brought the self-service, point-and-click experience of the cloud to our new services for containers, virtual machines and ML Ops, and dramatically sped up the delivery of our infrastructure services with standardized building blocks, T-shirt sized, that you can get in as little as 14 days. And a few weeks ago we added VDI as a service, to meet the strong demand to help your employees around the globe work securely wherever they may be.

Today we will look at four examples of how we provide the cloud experience for the workloads that are most critical to run your business, and we'll give a few industry examples. First, we'll talk about helping financial institutions manage risk and compliance. We'll talk about improving health care with a secure, flexible electronic health records platform, optimizing production and delivery for manufacturing with SAP HANA, and answering your biggest questions with high performance computing. When we talk about these demanding workloads, whether we're talking about inventory management, payment processing, medical imaging or any of the additional ones you see here, two things typically hold true. First, they're very difficult to move to the public cloud, due to the challenges around latency and performance, data gravity, IP and privacy protection, and the data entanglement with many other apps. And secondly, they require app-specific expertise to implement and integrate: continual performance optimization, strong resiliency, security and compliance management, and containerization to achieve mobility. These are tough to meet, but essential to have if you're betting your business on these workloads. We've helped our customers meet these challenges and requirements in the data center.

Let's start our discussion of these workloads with managing risk and compliance. Risk and compliance management requires analyzing huge amounts of data streaming in real time through the organization, and Splunk is widely used for this. As this scales, we have found that often infrastructure is the bottleneck, and organizations develop blind spots; this means they can only see some of the data. Scaling and making changes is also a slow process with such a complex set of infrastructure, and IT resources often don't have the skills to manage new platforms, such as container-based implementations.
We've looked at this situation and built a differentiated architecture to solve the challenge. The solution is container based, using the HPE Ezmeral Container Platform. It's an infrastructure that is tuned for Splunk, and it resulted in a big reduction in the total servers needed. It's delivered as a service through HPE GreenLake, on premises, fully managed, to make adoption fast and to cover the skill gaps IT may have. The outcomes? We tested our approach and found the dramatic improvements you see here. Infrastructure efficiency improved dramatically, with a 17 times increase in throughput and 12 Splunk indexers per host, up from one. Compliance and insights into risks improved by removing the blind spots, with a 10 times reduction in the infrastructure needed to ingest up to 8.7 terabytes per host per day. And customers get a greatly simplified IT operating model by moving to HPE GreenLake, fully managed, so that HPE takes care of the container and infrastructure management.

Next, let's talk about improving health care with a secure, flexible EHR platform. The global pandemic is putting an extraordinary burden on an industry whose budgets and resources are already stretched to the limits, and HPE can help. Health systems and medical research institutions around the globe recognize the value of HPE GreenLake for their infrastructure-as-a-service needs: scalable storage for high resolution medical imaging, high performance compute for medical research, and VDI for the digital workplace. Today we are pleased to introduce the platform for Epic's EHR system. This is a full platform-as-a-service offering for electronic health records. The service supports the Epic software stack, with validated HPE infrastructure and Epic-certified expertise to run the full environment for you. This enables health care institutions to offload the complexities of moving to and operating a modern Epic platform, reducing cost, risk and time with a fully managed, pay-per-use cloud service in their own data center or colo facility. Now our customers can focus on delivering life-affecting healthcare outcomes, and not on the nuances of daily technical operations and upgrades. So how is HPE qualified? Think back to the requirements we talked about for expertise. We have a 25-year partnership with Epic, and over 65% of Epic customers use HPE infrastructure, including storage, servers, software and networking. We know Epic and are trusted by Epic customers. We have a dedicated program management office with focused Epic resources to help health care systems make the most of their Epic platform, improving their quality of care, financial performance, workflow efficiency and, most importantly, their patient outcomes.

The next workload I'd like to cover is SAP HANA. SAP HANA runs many, if not most, manufacturing organizations, including our very own here at HPE. SAP finds that 70% of customers are looking to remain on premises with SAP HANA as they migrate to S/4, for the reasons we discussed earlier: performance, resiliency, security, IP protection and control. And we're proud to be one of SAP's most critical technology partners, running approximately 40% of the on-prem SAP customer base. These customers trust HPE infrastructure to run their critical SAP environments, and we're excited to extend the value into a fully managed on-prem cloud service. Today we bring the cloud benefits of HPE GreenLake to SAP HANA customers on premises in two ways.
Standard HPE GreenLake uses SAP-certified technology from HPE, with the scalable pay-per-use model and HPE's outstanding support and management services, ready to meet the demanding requirements of SAP HANA. And now we are working with SAP on the SAP HANA Enterprise Cloud, customer edition, which is powered by HPE GreenLake and fully managed by SAP for you; it is the SAP cloud in your data center. HPE Pointnext Services are essential to our customers. One of the reasons that customers choose HPE for workloads such as SAP is our expertise, from strategy all the way to operations. With advisory and professional services specific to your application, we help you succeed. HPE understands migration to SAP S/4HANA, and as the leading technology vendor of SAP HANA infrastructure, and a large SAP HANA customer ourselves, we have the expertise within our advisory and professional services to ensure your success as you move to S/4. HPE has delivered over 1,500 SAP HANA consulting projects, and HPE Pointnext Services has the expertise globally to accelerate time to value and mitigate your risk. And lastly, HPE offers a center of excellence experience for SAP HANA, providing specialized support from our experts to optimize operations for SAP environments.

The last, and maybe the most demanding, workload we'll cover today is HPC, high performance computing. Today we are announcing HPE GreenLake for HPC. This is an exciting time, as we bring our cloud services to HPC wherever you need it. As the leader in HPC, we have significant IP to give HPC customers. We offer the speed and scalability that you need, with components such as high-speed interconnects, high-density compute platforms, and software to manage HPC operations and performance. And unlike other technology companies', these are all from HPE: fully integrated, fully supported, and able to be fully managed by HPE. And we've built an ecosystem of ISV applications that we closely collaborate with to make HPC run seamlessly. High performance computing can get complex; HPE GreenLake for HPC will simplify the approach without taking away any of the power. Pick the starting point that fits your use case, small, medium or large, and get started. These building blocks are HPC optimized, meaning you can bring the technology that we use to predict weather or decode the human genome to your everyday apps. No capital up front, pay for what you use, and the implementation is managed for you. With our building block approach, we can eliminate the long design and implementation phase, which could take months or even a year. Over time, as your clusters grow, modernize and change, HPE GreenLake capacity management helps you always have capacity ready ahead of your needs. What is the experience with HPE GreenLake for HPC? You order, we deliver in as little as 14 days, we install your systems, and you can quickly deploy your HPC apps. With the new point-and-click service experience, researchers and analysts can get access to their HPC cluster resources from the self-service portal, without putting IT in the middle of every request. We manage the clusters for you, take care of upgrades, performance and growth, and you pay based on what you use, simplifying HPC economics and operations. This is how we bring the cloud to your most demanding workloads. So we've covered a lot, and the big question is: so what?
How do you benefit? Analysts have found that with HPE GreenLake you save 30 to 40% on total cost of ownership by eliminating overprovisioning, which on its own is huge. But the additional benefits are equally important to our customers. You can speed deployment of projects by 75%, cut your risk with 85% less unplanned downtime, and improve IT productivity by 40% due to the included services that greatly simplify IT operations. What's next? If you want to learn more about how we bring cloud services to your most demanding workloads, whether they're for risk management, EHR, SAP, or HPC, or for other workloads you depend on us for, please engage your HPE account team or your HPE partner. If you already are an HPE GreenLake customer, thank you, and we're ready to help you globally with your next project. And, of course, please visit us at hpe.com/greenlake. Thanks for joining me today.
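To make the consolidation arithmetic above concrete, here is a minimal Python sketch that reproduces it. Only the per-host figures come from the talk; the 50 TB/day workload is a hypothetical illustration, not HPE sizing data.

# Sanity check of the consolidation figures cited in the talk.
INGEST_PER_HOST_TB_DAY = 8.7      # up to 8.7 TB/host/day (from the talk)
INFRA_REDUCTION = 10              # 10x less infrastructure (from the talk)

def hosts_needed(daily_ingest_tb):
    """Estimate host counts before and after consolidation."""
    after = daily_ingest_tb / INGEST_PER_HOST_TB_DAY
    before = after * INFRA_REDUCTION   # a 10x reduction implies ~10x hosts before
    return round(before), round(after)

# Hypothetical example: a 50 TB/day Splunk estate.
before, after = hosts_needed(50.0)
print(f"before: ~{before} hosts, after: ~{after} hosts")   # before: ~57, after: ~6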

Published Date : Dec 4 2020


Platform for Photonic and Phononic Information Processing



>> Thank you for coming to this talk. My name is Amir Safavi-Naeini. I'm an Assistant Professor in Applied Physics at Stanford University, and today I'm going to talk about a platform that we've been developing here that allows for quantum and classical information processing using photons and phonons, or mechanical motion. So first I'd like to start off with a picture of the people who did the work. These are graduate students and postdocs in my group. In addition, I want to say that a lot of the work, especially on poling of the lithium niobate, was done in collaboration with Martin Fejer's group, and in particular Dr. Langrock, Jata Mishra, and Marc Jankowski. Now our goal is to realize a platform for quantum coherent information processing that enables functionality which currently does not exist in other available platforms. In particular, we want to have a very low loss nonlinearity that is strong and can be dispersion engineered to be made broadband. We'd like to make circuits that are programmable and reconfigurable, and that necessitates having efficient modulation and switching. And we'd also really like to have a platform that can leverage some of the advances with superconducting circuits, to enable large scale programmable dynamics between many different oscillators on a chip. So, in the next few years, what we're really hoping to demonstrate are few-photon optical nonlinear effects, by pushing the strength of these nonlinearities and reducing the amount of loss, and we also want to demonstrate these coupled qubit-and-many-oscillator systems. Now, the material system that we think will enable a lot of these advances is based on lithium niobate. Lithium niobate is a ferroelectric crystal. It's used very widely in optical components, in acousto-optics, and in surface acoustic wave devices. It's a ferroelectric crystal that has a built-in polarization, and that enables a lot of effects which are very useful, including the piezoelectric effect and electro-optic effects. And it has a very large chi-two optical nonlinearity, so it allows for three-wave mixing. It also has some effects that are not so great, for example pyroelectricity, but because it's a very established material system, there are a lot of tricks for dealing with some of the less attractive parts of this material. Now, most surface acoustic wave or optical devices that you would find are based on bulk lithium niobate crystals, which either use surface acoustic waves that propagate on a surface, or bulk waves propagating through a whole crystal, or have a very weakly guided, low index contrast waveguide patterned in the lithium niobate. This was the case until just a little over a decade ago, when work from ETH Zurich showed that thin-film lithium niobate can be bonded and patterned, and photonic circuits very similar to those made from III-V materials or silicon can be implemented in this material system. And this really led to a lot of different efforts from different labs. I would say the major breakthrough came just a few years ago from Marko Loncar's group, where they demonstrated that high quality factors are possible to realize in this platform. They showed resonators with quality factors in the tens of millions, corresponding to linewidths of tens of megahertz, or losses of just a few dB per meter.
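As a quick sanity check on those numbers (an illustration added here, not part of the talk): linewidth is carrier frequency divided by Q, so a Q in the tens of millions at telecom wavelengths does give linewidths of tens of megahertz.

# Relating optical Q to linewidth at telecom wavelengths.
C = 299_792_458.0              # speed of light, m/s
wavelength_nm = 1550.0         # telecom C-band
q_factor = 1e7                 # "tens of millions", per the cited result

carrier_hz = C / (wavelength_nm * 1e-9)   # about 193 THz
linewidth_hz = carrier_hz / q_factor      # delta_nu = nu / Q
print(f"carrier: {carrier_hz/1e12:.1f} THz, linewidth: {linewidth_hz/1e6:.1f} MHz")
# -> carrier: 193.4 THz, linewidth: 19.3 MHz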
And so that really changed the picture, and a little bit after that, in collaboration with Martin Fejer's group at Stanford, they were able to demonstrate poling, and so very large, dispersion-engineered nonlinear effects in these types of waveguides. That showed that very new types of circuits are possible on this platform. Now, our approach is very similar. We have a thin film of lithium niobate, and this time it's on sapphire instead of oxide or some polymer, and sometimes we put oxide, some silicon oxide, on top. We can also put electrodes on; these electrodes can be made out of a superconductor like niobium or aluminum, or they can be gold, depending on what we're trying to do. The important thing here is that the large index contrast means that light is guided in a very highly confined waveguide, and it supports bends with small bending radii. That means we can have resonators that are very small, so the mode volume for the photonic resonators can be very small, and as is well known, the interaction rate scales as one over the square root of mode volume. So we're talking about an enhancement of around six orders of magnitude in the interaction rate over systems using bulk components, in a circuit that's sub-millimeter in size, made on this platform. Now, interaction rate is important, but quality factor is also very important: when you make these things smaller, you don't want to make them much lossier. You can look at, for example, the second harmonic generation efficiency in these types of resonators, and that scales essentially as Q to the power of three, so you win a lot by going to low loss circuits. Loss and nonlinearity are material and waveguide properties that we can engineer, but careful design of these circuits is also very important. For example, because these are highly confined waves in dielectric waveguides, they can support several different orders of modes, especially if you're working with broadband light waves that span an octave. And when you try to couple light in and out of these structures, you have to be very careful that you're only picking up the polarizations that you care about, and that you're not inducing extra loss channels, effectively reducing the Q. Even though there's no material loss, these parasitic couplings can lead to lower Q, so the design is very important. This plot demonstrates the types of extrinsic-to-intrinsic coupling ratios that are needed to achieve very high efficiency SHG, which is related to optical parametric oscillation. You have to work in a regime where the extrinsic couplings are much larger than the intrinsic couplings, and this is generally true for any type of quantum operation that you want to do. So low material loss by itself isn't enough; the design is also very important. In terms of where we are on these three important aspects, getting large g, large Q, and large kappa: we've been able to achieve high Q in these structures, a Q of a couple million. From a broad transmission spectrum through a grating coupler, you can see very evenly spaced modes, showing that we're only coupling to one mode family, and we can see that the depth of the modes is also very large, 90% or more, which means that our extrinsic coupling relative to the intrinsic coupling is also very large.
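To put a number on that overcoupling requirement (my illustration, with made-up ratios): for a resonator with one useful port, the extraction efficiency is kappa_ext / (kappa_ext + kappa_int), so the extrinsic coupling must dominate the intrinsic loss for near-unity efficiency.

# Extraction efficiency versus coupling ratio (illustrative).
for ratio in (0.1, 1.0, 10.0, 100.0):        # kappa_ext / kappa_int
    eta = ratio / (1.0 + ratio)              # fraction leaving via the useful port
    print(f"kappa_ext/kappa_int = {ratio:>5}: efficiency = {eta:.2f}")
# Only the strongly overcoupled cases approach the efficiency needed
# for the quantum operations described above.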
So we've been able to engineer these devices to achieve this. In terms of the interaction, I won't go over it too much, but in collaboration with Martin Fejer's group we were able to pole both lithium niobate on insulator and lithium niobate on sapphire, and we've been able to see very efficient, high slope efficiency second harmonic generation, approaching 5000% per watt-centimeter-squared for 1560 nm to 780 nm conversion. So this is all work in progress. For now, I'd like to talk a little bit about the integration of acoustic and mechanical components. First of all, why would we want to integrate mechanical components? Well, there are lots of cases where, for example, you want extremely high extinction switching functionality. That's very difficult to do with electro-optics, because you need to control the phase extremely efficiently and with extreme precision; you would need very large, long resonators and/or large voltages, and it becomes very difficult to achieve 60 dB types of switching. Mechanical systems, on the other hand, can have very small mode volumes and can give you 60 dB switching without too many complications. Of course, the drawback is that they're slower, but for a lot of applications that doesn't matter too much. So, in terms of integrating MEMS switching and tuning into this platform, here's a device that achieves that. Each of these beams is actuated through the piezoelectric effect in lithium niobate via a pair of electrodes that we put a voltage across. These have been designed to leverage one of the off-diagonal terms in the piezoelectric tensor, which causes bending, and this bending generates a very large displacement in the center of the beam. This beam, you might notice, is composed of a grating, and this grating effectively creates a photonic crystal cavity. So it generates a localized optical mode in the center, which is very sensitive to these displacements. What we're able to see in this system is that just a few millivolts, 50 millivolts here, shifts the resonance frequency by much more than a linewidth; just a few millivolts is enough to shift by a linewidth. So we can achieve switching, and we can also tune this resonance across the full telecom band. These types of devices, whether in waveguide or resonator form, can be extremely useful for phase control in a large scale system, where you might want many, many phase switches on a chip to control phases with low loss; because these waveguides are shorter, you have lower loss propagating across them. Now, these interactions are fairly low frequency. When we go to higher frequency, we can use the electro-optic effect. And the electro-optic effect, even though it's very widely used and well known, has interesting consequences and device opportunities on a photonic circuit like these lithium niobate circuits that don't exist in bulk devices. For example, let's look at single sideband modulation. This is what a standard electro-optic single sideband modulator looks like: you take your light, you split it into two parts, and then you modulate each of these arms with an RF tone, out of phase. You generate sidebands on both, and because they're modulated out of phase, when they are recombined at the output splitter of this Mach-Zehnder interferometer you end up dropping one of the sidebands and the pump, and you end up with a shifted sideband. So single sideband modulation is possible with an electro-optic device, but the caveat is that this is fundamentally lossy: you have generated the other sideband via modulation, and that sideband is simply being lost to interference; it's getting scattered away because there's no mode that it can connect to. So this actually has an efficiency of less than 3 dB, usually much less than 3 dB. And that's fine if you just have one of these single sideband modulators, because you can always amplify, you can send more power. But if you're talking about a system where you have many of these and you can't put amplifiers everywhere, or you're working with quantum information, where loss is particularly bad, this is not an option.
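To make that sideband cancellation concrete, here is a small numerical sketch (mine, with hypothetical drive parameters, not the talk's device): phase-modulating the two arms of a Mach-Zehnder in quadrature, with a 90-degree optical bias, cancels one first-order sideband while keeping the other.

import numpy as np

m = 0.3                       # modulation depth in radians (hypothetical)
f_rf = 1e9                    # 1 GHz RF tone (hypothetical)
fs = 64e9                     # sample rate; integer samples per RF period
t = np.arange(0, 1e-6, 1/fs)  # exactly 1000 RF periods

# Two arms phase-modulated in quadrature, plus a 90-degree optical bias.
arm1 = np.exp(1j * m * np.sin(2*np.pi*f_rf*t))
arm2 = np.exp(1j * (m * np.sin(2*np.pi*f_rf*t + np.pi/2) + np.pi/2))
field = (arm1 + arm2) / 2

spec = np.abs(np.fft.fft(field))**2
freqs = np.fft.fftfreq(len(t), 1/fs)
p = lambda f: spec[np.argmin(np.abs(freqs - f))]

# The lower sideband survives; the upper one cancels (to numerical precision).
print(f"suppression: {10*np.log10(p(-f_rf)/p(+f_rf)):.0f} dB")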
Now, when you use resonators, you have another option. Here's a device that demonstrates this. These are two resonators brought into the near field of each other, so they're coupled over here, which causes a splitting. When we apply a DC voltage, we tune one of these resonators by changing its effective optical path length, and as its frequency tunes, we see an anti-crossing between the two modes. At the center of this splitting versus voltage, at around 15 volts here, we can see two resonances, two dips, when we probe the light field going through. Now, if we send in the pump resonant with one of these, and we modulate at the difference frequency, we generate the red sideband, but we actually don't generate the blue sideband, because there's no optical density of states there. Because the other sideband is simply not generated, this system is much more efficient. In fact, Marko Loncar's group has demonstrated that you can get a hundred percent conversion, and we've also demonstrated this in a similar experiment, showing that you can get very large sideband suppression: more than 30 dB suppression of the sidebands with respect to the sideband you care about. It's also interesting that these interactions preserve quantum coherence, and this is one path to creating links between superconducting microwave systems and optical components, because the microwave signal that's scattered here preserves its coherence. We've also been able to do acousto-optic interactions at these high frequencies. This is an acousto-optic modulator that operates at a few gigahertz. Basically, you generate an electric field here, which launches a propagating wave inside this transducer made out of lithium niobate; these are aluminum electrodes on top. The phonons are focused down into a small phononic waveguide that guides mechanical waves, and then these are brought into this crystal area, where the sound and the light are both confined to wavelength-scale mode volumes and interact very strongly with each other. And the strong interaction leads to very efficient, effective electro-optic modulation.
So here we've been able to see, with just a few microwatts of power, many, many sidebands being generated. This is effectively an electro-optic modulator where the V-pi is a few thousandths of a volt, instead of the several volts of an off-the-shelf electro-optic modulator. And importantly, we've been able to combine these photonic and phononic circuits in the same platform. This is the same lithium niobate on sapphire platform. This is an acoustic transducer that generates mechanical waves that propagate in this lithium niobate waveguide; you can see them here. And we can make phononic circuits now: this is a ring resonator for phonons. We send sound waves through, and when the frequency hits the ring resonances, we see peaks; these are the peaks in the drop port coming out. What's really nice about this platform is that, unlike many MEMS platforms, where you have to have release steps that are usually not compatible with other devices, here there are no release steps: the phonons are guided in that thin lithium niobate layer. The high Q of these mechanical modes shows that these mechanical resonances can be very coherent oscillators. We've also worked towards integrating these with very nonlinear microwave circuits, to create strongly interacting phonons and phonon circuits. This is an example of an experiment we did over a year ago, where we have a superconducting qubit circuit with mechanical resonators made out of lithium niobate shunting the qubit capacitor to ground. Vibrations of this mechanical oscillator generate a voltage across these electrodes that couples to the qubit's voltage, so now you have an interaction between the qubit and the mechanical oscillator. We can see that in the spectrum of the qubit as we tune it across the frequency band: we see splittings every time the qubit frequency approaches a mechanical resonance frequency. In fact, this coupling is so large that we were able to observe, for the first time, the phonon spectrum. We can detune the qubit away from the mechanical resonance, and then you have a dispersive shift on the qubit which is proportional to the number of phonons, and because the number of phonons is quantized, we can actually see the different phonon levels in the qubit spectrum. Moving forward, we've been trying to understand what the sources of loss are in the system, and we've been able to do this by fabricating very large arrays of these mechanical oscillators and looking at things like their quality factor versus frequency. This is an example of a measurement that shows a jump in the quality factor when we enter the frequency band where we expect the phononic band gap for this periodic material. If loss were only due to clamping, only due to acoustic waves leaking out of these ends, then losses should be exponentially suppressed with length and the quality factor should go to essentially infinity. But it's not, and that means we're actually limited by other loss channels.
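A rough numerical sketch of that dispersive phonon-number readout (my illustration; the coupling and detuning values are hypothetical, not the experiment's): in the dispersive regime each phonon shifts the qubit frequency by about 2g^2/Delta, so the qubit spectrum splits into peaks weighted by the phonon number distribution.

import numpy as np

g = 2*np.pi * 15e6          # qubit-phonon coupling (hypothetical)
delta = 2*np.pi * 300e6     # qubit-mechanics detuning (hypothetical)
chi = g**2 / delta          # dispersive shift per phonon is 2*chi
print(f"shift per phonon: {2*chi/(2*np.pi*1e6):.2f} MHz")

# Thermal phonon occupation, for illustration only.
nbar = 0.8
for n in range(5):
    p_n = nbar**n / (1 + nbar)**(n + 1)     # thermal distribution P(n)
    print(f"peak at {2*chi*n/(2*np.pi*1e6):+.2f} MHz, weight {p_n:.2f}")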
And we've been able to determine that these are two-level systems in the lithium niobate, by looking at the temperature dependence of these losses and seeing that they fit very well the standard models that exist for the effects of two-level systems on microwave and mechanical resonances. We've also started experimenting with different materials. In fact, we've been able to see that, for example, going to lithium niobate that's doped with magnesium oxide significantly reduces the effect of the two-level systems, and this is a really exciting direction of research that we're pursuing as we come to understand these materials. So with that, I'd like to thank the sponsors: NTT Research, and of course a lot of this work was funded by DARPA, ONR, ARO, DOE, very generous funding from the David and Lucile Packard Foundation, and others that are shown here. So thank you.

Published Date : Sep 24 2020


On Demand: R&D Data Platform (GSK)



>> Hey, everyone, thanks for taking the time to join this story. Hope you and your loved ones are safe during these tough times. Let me start by introducing myself. My name is Michelle. I work for GlaxoSmithKline, GSK, as an engineering manager. In my current role I lead the platform APIs, which are part of the R&D Data Platform here in GSK R&D Tech. I live in Dallas, Texas. I have a Master's degree in computer science and a bachelor's in electronics and communication engineering. I started my career as a software developer, and over the years I've gained a lot of experience in leading and building large-scale products and solutions. I also have complete accountability for container platforms here at GSK R&D Tech. I've been working very closely with Docker Enterprise, which is now Mirantis, for more than three years to enable container platforms at GSK, mainly in our R&D Tech. So that's me. >> Let me give you a quick overview of the agenda for today's talk. I'll start with what we do here at GSK and what the R&D Data Platform is. Then I'll give you an overview of the business drivers that motivated us to take this container journey, and some insight into our learnings and accomplishments over these years working with Docker Enterprise on the container platforms. Lately, you must have seen a lot of articles out there which talk about how GSK is leveraging technologies like artificial intelligence, machine learning, and data and analytics for the drug discovery process. I'm very excited to see the progress we have made in technology, but what makes us truly unique is our commitment to the patient. We at GSK help millions of people do more, feel better, and live longer. We are a global company that is focused on three verticals: pharmaceuticals, vaccines, and consumer healthcare. Our main intent is to lower the burden and the impact of diseases on patients. Here at GSK, we allow science to drive the technology. This helps us build innovative products that help our scientists make better and faster decisions throughout the drug discovery pipeline. With that, let me give you some context on what the R&D Data Platform is and how it came to be at GSK. It started in mid 2016 as what was then called the R&D Information Platform, whose main focus was to centralize, curate, and rationalize all the data produced within the R&D business systems in order to drive strategic business value. Standardization of clinical trials, genome-wide association study (GWAS) analysis, and storage and processing of real-world evidence data are some examples of how the platform was used to deliver business value. Four years later, a new set of business drivers is changing our landscape. The R&D Information Platform is evolving to be a hybrid, multi-cloud solution and is now known as the R&D Data Platform. Referring to GSK's 2019 annual report, these are the four themes that the R&D platform will be mainly focused on. We're expanding our data capabilities to support GSK as a biopharma company, and evolving into a hybrid multi-cloud platform is one of the many steps we're taking to be future ready. Our key focus will still be making recommendations better and faster by using data and analytics, and we're investing in areas like artificial intelligence and machine learning. Now, that brings us to why this journey is important and why we are taking it. With that, let me take you to the next topic.
Drug discovery is not an easy process. With the recent events of the last few months and the way all our lives have been impacted, there is a lot of talk and information going around about why the drug discovery process is so tough. Working for a global healthcare company, I get asked this question very frequently by many people I interact with: why is discovery so tough, and why does it take so much time? Drug discovery is a complex process that involves multiple different stages, and at each and every stage there are huge amounts of data that the scientists have to process to make decisions. Studies have shown that only 3% of small molecules entering human studies actually become medicines. If you're new to drug discovery, you may ask why that rate is so low; we humans are a very complex species. Without going into the details of the process, we at GSK have made a lot of investments in technology that enable us to make data-driven decisions throughout the drug discovery pipeline. As we started implementing the tools and technologies to enable the R&D Data Platform, we began to get a better appreciation of how these tools interact and integrate with each other. Our goal was to make this an agile platform that can work at scale, so that we can provide a great user experience and contribute back to the drug discovery pipeline, so that the scientists can make faster decisions. We want our R&D users to consume the data and the services available on the platform seamlessly, in a self-service fashion, and we have to accomplish this by establishing trust. We also have to enable the academic partnerships, acquisitions, and collaborations that GSK has, which bring a lot of data and value to our scientists. With so many collaborations and systems, what this brings is a wide range of systems and platforms that are fundamentally built on different infrastructure. This is where Docker comes into the picture, and where containers matter. We have realized the power of containers and how we can simplify this complex ecosystem by using them, providing faster access to data for our scientists, who can then contribute back to the drug discovery pipeline. With that, let me talk to you about the container journey at GSK. We started our container journey in late 2017, working with Docker Enterprise to enable the container platform on our on-prem infrastructure. For the first year or so, we worked through multiple POCs and did a lot of testing to make sure our platform was stable before we onboarded either the data or the user applications. I was part of this complete journey, and the Docker team worked with us very closely toward that first milestone of establishing a stable container platform at GSK. Getting into 2019, we started deploying our applications in the production environment. I cannot go into the details of what these apps are, but they do include both data pipelines and web services. In the initial days we worked a lot with Swarm, but 2019 is when we started looking into Kubernetes. In the same year, we enabled Kubernetes orchestration on the Docker Enterprise platform here at GSK and made it the de facto orchestrator. Coming into 2020, all our microservice applications and
data pipelines have been migrated to the container platforms, all orchestrated by Kubernetes, and these are applications running in production. As of today, we have made the container-first approach an architectural standard across R&D Tech at GSK. We have also started deploying our AI/ML training models in containers, and all of this work is happening on our Docker Enterprise platform. Also, as part of the R&D Data Platform's hybrid multi-cloud journey, we started enabling container and Kubernetes based platforms on public clouds. Now, going into 2021 and beyond, enabling our R&D users to easily access data and applications in a platform-agnostic way is crucial for our success, because previously we had only on-prem, and now public clouds are getting involved. One of the many steps we're taking on this journey is to virtualize the data and ship it in containers or Kubernetes volumes, on demand, to our R&D users and scientists. This allows us to deliver data to our scientists wherever they want it, in a very secure way, and we're leveraging Docker to do it. So that's our future direction. With that, let's take a deeper dive into a few of our accomplishments over these years. I want to start with a very interesting and innovative use case that we developed on Docker: a rapid prototyping capability that enabled our scientists to do seamless multi-cluster communication. This was one of the biggest challenges we had faced for a long time, and with the help of containers we were able to solve it and provide it as a capability to our scientists. We have actually showcased this capability at one of the Docker conferences. Next, as I've said before, by migrating all of our web services into containers, we not only achieved horizontal scalability for those specific services but also saved more than 50% in support costs for the applications we migrated. By making the Docker image an immutable artifact in our build process, we are now able to deploy our apps or models on any container or Kubernetes based platform, either on-prem or in a public cloud. We also made significant improvements in process automation by leveraging Docker containers. Containers have played a significant role in keeping us platform agnostic and thus enabling our hybrid multi-cloud journey, which is valuable for our R&D data scientists. As I mentioned before, data virtualization is another viewpoint we have in terms of our next steps for where we want to take Kubernetes and where we want to leverage it. What you see here are just a few of the many accomplishments we have achieved by using containers over the past three years or so. With that, before I close, I want to acknowledge all our internal partners who contributed a lot to this journey, mainly our R&D business and the broader tech organizations at GSK. I also want to thank Docker, present-day Mirantis, for being such a great partner throughout this journey and for giving us the opportunity to share this success story today. Lastly, thanks everyone for listening to this talk, and please feel free to reach out if you have any questions or suggestions. Please stay safe. Thank you.
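As a minimal illustration of the immutable-image idea described above (a sketch, not GSK's actual pipeline; the image and command are placeholders), the Docker SDK for Python runs the same tagged image unchanged on any host:

import docker

client = docker.from_env()   # connect to the local Docker daemon

# The same pinned image can run on-prem or in any cloud, which is the
# portability property described in the talk.
output = client.containers.run(
    image="python:3.11-slim",                          # stand-in pipeline image
    command=["python", "-c", "print('pipeline step ok')"],
    remove=True,                                       # clean up afterwards
)
print(output.decode().strip())                         # -> pipeline step ok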

Published Date : Sep 14 2020


On Demand: Building a Multi-Cluster Container Platform (S&P Global)



>> Hello, everyone. I'm Khalil Ahmad, Senior Director, Architecture at S&P Global. I have been working with S&P Global for six years now. Previously, I worked for Citigroup and Prudential. Overall, I have been part of the IT industry for 30 years, and most of my professional career has been within the financial sector in the New York City metro area. I live in New Jersey with my wife and son, Daniel Khalil. I have a Master's degree in software engineering from the University of Scranton and a Master's in mathematics from the University of Punjab, Lahore. And currently I am pursuing the TRIUM Global Executive MBA, a joint program from NYU Stern, LSE, and HEC Paris. So today, I'm going to talk about building a multi-cluster, scalable container platform supporting on-prem, hybrid, and multicloud use cases: how we leveraged it at S&P Global, and what our success story was. As far as the agenda is concerned, I will quickly go over the problem statement. Then I will cover our core requirements and how we approached the solution, how Docker Enterprise helped us, and at the end I will go over the pilot deployment for the proof of concept which we leveraged. So, as far as the problem statement is concerned: containers, as you all know, are becoming mainstream in the enterprise, but expertise remains limited, and challenges are mounting as containers enter production. Some companies are building skills internally, and some are looking for partners that can help catalyze success, choosing more integrated solutions that accelerate deployments and simplify the container environment. To overcome the challenges, we at S&P Global started our journey a few years back, taking advantage of both options. So, first of all, we met with all the stakeholders, application teams, and product managers, and we defined our core requirements: what we want out of this container platform, which has to support multicloud and hybrid, supporting on-prem as well. So, as you see in my core requirements, we decided that we need, first of all, a roadmap or container strategy providing guidelines on standards and specifications. Secondly, within S&P Global, we decided to introduce a Platform-as-a-Service approach, where we bring the container platform and provide it as a service internally to all our application teams and product managers, hosting multiple applications on-prem as well as in multicloud. The third requirement was that we need Linux and Windows container support. In addition to that, we would also require a hosted, secure image registry with role-based access control and image security scanning. We had also started our DevOps journey, so we wanted full support for CI/CD pipelines. Whatever solution we recommended from the architecture group should integrate easily with the developer workstation, and the developer workstation could be Windows, Mac, or Linux. Orchestration, performance, and control were a few other parameters we wanted to keep in mind. And most important: dynamic scaling of container clusters. That was something we also wanted to achieve when we introduced this Platform as a Service. So, as far as the standards and specifications are concerned, we turned to the Open Container Initiative, the OCI. OCI was established in June 2015 by Docker and other leaders in the technology industry. OCI operates under the Linux Foundation and currently contains two specifications: the runtime specification and the image specification. So, at that time, it was a no-brainer to just stick with OCI.
So, we are following the industry standards and specifications. Now the next step was: okay, the container platform, but what would be our runtime engine? What would be the orchestration? And how would we support it in our on-prem as well as multicloud infrastructure? When it comes to the runtime engine, we decided to go with Docker, which is the default runtime engine, and Kubernetes. And if I may mention, DataDog, in one of their public reports, said Docker is probably the most talked about infrastructure technology of the past few years. So, sticking with the Docker runtime engine was another win, and we saw that in the future it would not bring any challenges or issues. When it comes to orchestration, we preferred Kubernetes, but at that time there was a challenge: Kubernetes did not support Windows containers. So, we wanted something which worked with Linux containers and also had the ability to orchestrate Windows containers. Even though long term we wanted to stick with Kubernetes, we also wanted to have Docker Swarm. When it comes to on-prem and multicloud, technically you can only support it, as of now (technology may change in the future), if you bring your own orchestration tool. So, in our case, having control over orchestration and not being locked in with one cloud provider was the ideal situation. With all that research, R&D, and our findings, we found Docker Enterprise, which lets you securely build, share, and run modern applications anywhere. When we came across Docker Enterprise, we were pleased to see that it meets most of our core requirements: whether it is the developer machine, integrating with their workstation and building the application; whether it comes to sharing those applications in a secure way and collaborating within our pipeline; and lastly, when it comes to running, whether in hybrid, multicloud, or edge, with Kubernetes, Docker Enterprise has support all the way. So, three areas I would call out for Docker Enterprise: choice, flexibility, and security. I'm sure there are a lot more features in Docker Enterprise as a suite, but when we looked at these three words, very quickly: simplified hybrid orchestration; define application-centric policies and boundaries, and once you define them, you're all set, you just maintain those policies; manage diverse applications across mixed infrastructure with secure segmentation. Then it comes to the secure software supply chain: provenance across the entire lifecycle of apps and infrastructure through enforceable policy, and consistent management of all apps and infrastructure. And lastly, when it comes to infrastructure independence: it made lift and shift easy, which mattered because, at the same time, our cloud journey was in flight; we were moving from on-prem to the cloud. So, support for lift-and-shift applications was on our wishlist, and Docker Enterprise did not disappoint us. It also supported both traditional and microservices apps on any infrastructure. So, here we are: Docker Enterprise. Why Docker Enterprise? Some of the items I mentioned in previous slides, but in addition to those, it is an industry-leading platform, simplifying IT operations for running modern applications at scale, anywhere. Docker Enterprise also has developer tools, so the integration, as I mentioned earlier, was smooth. In addition to all these tools, the main two components, the Universal Control Plane and the Docker Trusted Registry, solved a lot of our problems.
When it comes to orchestration, we have our own Universal Control Plane, which under the hood manages both Kubernetes and Docker Swarm clusters. So, guess what? We have Windows support through Docker Swarm, and we have Linux support through Kubernetes. Now that paradigm has changed; as of today, Kubernetes supports Windows containers. So we are well positioned with UCP, because we have our own orchestration tool: we started by managing Kubernetes clusters on Linux and have now introduced Windows as well. Then comes the Docker Trusted Registry. Integrated security and role-based access control made for a very smooth transition from our previous registry storage to DTR. In addition to that, binary-level scanning was another good feature from the security point of view. So all of these options, and our R&D, confirmed that Docker Enterprise was the way to go. And with Docker Enterprise, we can spin up multiple clusters on-prem and in the cloud, and we have one centralized location to manage those clusters. >> Khalil: So, with all that, now let's talk about our pilot deployment for the proof of concept. In this diagram, on the left side is our on-prem data center; on the right side is AWS, US East Coast. We picked one region, three zones. On-prem, we picked one of our data centers in the United States of America, and we started the POC. Our Universal Control Plane had a five-node cluster. The Docker Trusted Registry also had a five-node cluster, and both were in our on-prem data center. When it comes to the worker nodes, we started with an 18-node cluster on the Linux side and a four-node cluster on the Windows side, because the major footprint we had was on the Linux side and the Windows use cases were pretty small; also, this was just a proof of concept. In AWS, we mirrored the worker-node setup we have on-prem: a 13-node cluster on Linux, and a four-node cluster of Windows containers to start. And having Direct Connect from our data center to AWS, which already existed, we did not have any connectivity or latency issues. Now, if you look at this diagram, you have a centralized Universal Control Plane and your Trusted Registry, and we were able to spin up clusters on-prem as well as in the cloud. And we made this happen, end to end, in record time. Later, when we deployed this in production, we also added another cloud provider: the box you see on the right side, we simply duplicated in another cloud platform. So now one orchestration tool manages on-prem and multicloud clusters. In your use case, you may find this a little more weighted toward on-prem, but that fit our use case. Later, we did expand the clusters of the Universal Control Plane and DTR into the cloud as well, and the clusters have grown to hundreds and thousands of worker nodes spanning two cloud providers, with a third being discussed. This solution has been working very well so far; we did not see any downtime, not a single incident, and we were able to provide a multicloud container Platform as a Service for S&P Global. Thank you for your time. If you have any questions, I have put up my LinkedIn and Twitter handles; you're welcome to ask any question.
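A centralized view over on-prem and cloud clusters like the one described can be approximated with the official Kubernetes Python client by iterating over kubeconfig contexts. This is a sketch with hypothetical context names, not S&P Global's actual tooling:

from kubernetes import client, config

# One kubeconfig with a context per cluster, e.g. "onprem-dc1", "aws-us-east-1".
contexts, _active = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
    nodes = api.list_node().items
    ready = sum(1 for n in nodes
                for c in n.status.conditions
                if c.type == "Ready" and c.status == "True")
    print(f"{name}: {len(nodes)} nodes, {ready} ready")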

Published Date : Sep 14 2020


HPE Data Platform



From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. Hi, I'm Peter Burris, analyst, Wikibon. Welcome to another Wikibon-theCUBE digital community event, this one sponsored by HPE. Like all of our digital community events, this one will feature about 25 minutes of video, followed by a CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on important issues facing business today. So what are we talking about today? Over the course of the last six months or so, we've had a lot of conversations with our customers about the core issues that multi-cloud is going to engender within business. One of them, clearly, is how we bring greater intelligence to how we move, manage, and administer data within the enterprise. Some of the more interesting conversations we've had turn out to have been with HPE, and that's what we're going to talk about today. We're going to spend a few minutes with a number of HPE professionals, as well as Wikibon professionals and thought leaders, talking about the challenges that enterprises face as they consider intelligent data platforms. So let's get started. The first conversation is with Sandeep Singh, who is a vice president at HPE. Sandeep, let's have that conversation about the challenges facing business today as it pertains to data. So Sandeep, I started off by making the observation that we've got this mountain of data coming in to a lot of enterprises; at the same time, the notion that data is going to create new classes of business value seems to be pretty deeply ingrained and acculturated in a lot of decision-makers. So they want more value out of their data, but they're increasingly concerned about the volume of data that's going to hit them. In your conversations with customers, how are you hearing them talk about this fundamental challenge? So that's a great question. Across the board, data is at the heart of applications, of pretty much everything that organizations do, and in conversations with customers it really boils down to a couple of areas. One is: how is my data just effortlessly available all the time, and always fast, because fundamentally that's driving the speed of my business, and how can my various audiences, including developers, just consume it like the public cloud, in a self-service fashion? And the second part of that conversation is really about this massive data storm, this mountain of data, that's coming: how do I drive a competitive advantage, how do I unlock the hidden insights in that data to uncover new revenue streams and new customer experiences? Those are the areas we hear about, and fundamentally, underlying it all, the challenge for customers is: boy, I have a lot of complexity, and how do I ensure that I have the necessary insight into infrastructure management so that I, or my IT staff, am not beholden to fighting the IT fires that can cause disruptions and delays to projects? So fundamentally we want to be able to take the time and attention spent on the infrastructure, on the administration of the devices that handle the data, and move that time and attention up into how we deliver the data services, and ideally up into the applications that are going to actually generate a new class of work within a digital business. Have I got that right? Absolutely.
It's about infrastructure that just runs seamlessly; it's always on, it's always fast. People don't have to worry about whether it's going to go down, whether their data is available, or whether it's going to slow down. People don't want sometimes fast, they want always fast, right? And that's governing the application performance that ultimately I can deliver. And you talked about how, if the data infrastructure just works seamlessly, I can eventually get to the applications and to building the right pipelines for mining that data, driving the AI and machine learning, analytics-driven insights from there. A great discussion about the importance of data in the enterprise and how it's changing the way we think about business. We're going to come back to Sandeep shortly, but first let's spend some time talking with David Floyer, who's the Wikibon analyst, about the new mindset that is required to take advantage of some of these technologies and solve some of these problems. Specifically, we need to think increasingly about data services. Let's hear what David has to say. Explain what that new mindset is. Yes, I completely agree that a new mindset is required, and it starts with wanting to be able to deal with data wherever it's going to be. We are in a hybrid cloud world: your own clouds, other public clouds, partner clouds. All of these need to be integrated, and data is at the core of it. So the requirement, then, rather than thinking about each individual piece, is to think about services which are going to be applied to that data, and which can be applied not only to the data in one place but across all of that data. And there isn't such a thing as just one set of services; there are going to be multiple sets of these services available, though hopefully we will see some degree of convergence, so there'll be the same lexicon and concepts, the same levels of things needed within each of these architectures, but with different emphases in different areas. We need to look at the way we administer data as a set of services that create outcomes for the business, which are then translated into individual devices. So let's jump into this notion of what those services look like; it seems as though we can list off a couple of them. Sure. You must have data reduction techniques, deduplication and compression types of techniques, and you want to apply them across as big an amount of data as you can; the more data you apply them to, the higher the levels of compression and deduplication you can get. So that's clearly one set of services. You must back up and restore data in another place and be able to restore it quickly and easily; that, again, is a service, and how quickly and how integrated that recovery is, again, is a variable, a differentiation in the service. Exactly. You're going to need data protection in general, end-to-end protection of one sort or another. For example, you need end-to-end encryption; it's no longer good enough to say this bit's been encrypted and then this bit's been encrypted. It's got to be end-to-end, from one location to another, seamlessly provided, that sort of thing. Well, let me press on that, because I think it's a really important point: it's the notion that the weakest link determines the strength of the chain, right? What you just described says, if you have encryption here and you don't have encryption there, then because of the nature of digital, as you start bringing that data together, guess what: the weakest link determines the protection of the overall data. Absolutely, yes.
And then you need services like snapshots and other services which provide much better usage of that data. One of the great things about flash, and what has brought this about, is that you can take a copy of data in real time and use it for a totally different purpose, changing it in a different way. So there are some really significant improvements you can have with services like snapshots. And then you need some other services which are becoming even more important, in my opinion. The advent of bad actors in the world has really brought about the requirement for things like air gaps: having your data, with the metadata, all in one place and completely separated from everything else. There are such things as logical air gaps; as long as they're real, in the sense that the two paths can't interfere with each other, those are going to be services which become very, very important. That's an example of a general class of security data services that are required. So ultimately what we're describing is a new mindset that says a storage administrator has to think about the services that the applications and the business require, and then seek out technologies that can provide those services at the right price point, with the right power consumption, space, and environmentals, and with the type of maintenance and support required, based on the physical location, the degree to which it's under their control, and so on. Is that how we should think about this? Absolutely. And again, if there are going to be multiple of these in the marketplace, one size is not going to fit all. If you want super fast response time at an edge, and the response is of no use whatsoever if it doesn't arrive in time, you're going to have a different architecture, a different way of doing it, than if you need to be a hundred percent certain that every bit is captured, say in a financial sort of environment. But from a service standpoint, you want to be able to look at each specific solution in a common way: common policies, common capabilities. Correct. Great observations by David Floyer. It's very clear that for enterprises to get more control over their data, their data assets, and how they create value out of data, they have to take a services mentality. But the challenge we all face is that just taking a services mentality is not going to be enough; we have to think about how we're going to organize those services into a platform that is pertinent and relevant to how business operates in a digital sense.
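To make one of those data services concrete, here is a minimal sketch of block-level deduplication by content hashing (an illustration only; real arrays do this inline, with compression layered on top and carefully chosen block sizes):

import hashlib

BLOCK = 4096  # bytes; an illustrative block size

def dedup_savings(data: bytes) -> float:
    """Fraction of blocks eliminated by storing each unique block once."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# Repetitive data dedupes well; the point from the discussion is that the
# wider the pool you apply this across, the more duplicates you find.
sample = (b"A" * BLOCK) * 90 + bytes(range(256)) * 160   # 100 blocks, 2 unique
print(f"dedup savings: {dedup_savings(sample):.0%}")      # -> 98%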
But really, all of these systems are then imbued with a level of intelligence, with a global intelligence engine that begins with predicting and proactively resolving issues before they occur. It goes way beyond that, though, delivering prescriptive insights built on top of global learning across hundreds of thousands of systems, with over a billion data points coming in on a daily basis, to put information at the fingertips of even the virtual machine admins: this virtual machine is sapping the performance of this node, and if you were to move it to this other node, the performance, or the SLA, for the whole virtual machine farm will be even better. We build on top of that to deliver pre-built automation, hooked in with a REST-API-first strategy, so that developers can consume it from a containerized application orchestrated with Kubernetes, or leverage it as infrastructure as code, whether that's with Ansible, Puppet or Chef. We accelerate all of the application workloads and bring app-aware data protection, so it's available for the traditional business applications, whether they're built on SAP or Oracle or SQL, for the virtual machine farms, and for the new-stack containerized applications. And then customers can build their AI and big data pipelines on top of the infrastructure with a plethora of tools, whether they're using Kafka, Elastic, MapR or H2O; that complete flexibility exists. And within HPE we're then able to turn around and deliver all of this to customers with an as-a-service experience, with HPE GreenLake. >> So that's where I want to take you next. How invasive is this going to be for a large shop? >> Well, it is completely seamless in that way. With GreenLake we're able to deliver a fully managed service experience with a cloud-like, pay-as-you-go consumption model, and combining it with HPE Financial Services we're also able to transform their organization in terms of this journey, and make it a fully self-funding journey as well. >> So today the typical shop has a bunch of administrators administering devices. That's starting to change: they've introduced automation, but automation that's typically associated with those devices. If we think three to five years out, folks are going to be thinking more in terms of data services and how those services get consumed, and that's what the storage part of IT is going to be thinking about. They almost become data administrators. Have I got that right? >> Yes. Intelligence is fundamentally changing everything, not only on the consumer side but on the business side. A lot of what we've been talking about is that intelligence is the game changer; we actually see the dawn of the intelligence era. Through this AI-driven experience, what it means for customers is, first, a support experience that they just absolutely love. Secondly, it means that the infrastructure is always on, always fast, always optimized. And thirdly, in terms of the data services and the data insights being unlocked, it's all about how you enable your innovators, the data scientists and the data analysts, to shrink the time to deriving insights from months literally down to minutes. Today there's a chasm that exists between the great concept of how I can leverage this AI technology and actually making it real.
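Singh's REST-API-first point a moment ago is the concrete hinge: if every recommendation the intelligence engine produces is reachable over an API, the same insight can drive a dashboard, a Kubernetes-orchestrated job, or an Ansible play. Here is a hedged sketch of what such consumption could look like; the endpoint, payload shape, and token are hypothetical, not HPE's actual API.

    # Hypothetical client for an infrastructure-intelligence REST API.
    # The base URL, routes, and response fields are all invented.
    import requests

    BASE = "https://intelligence.example.com/api/v1"
    HEADERS = {"Authorization": "Bearer <token>"}  # assume a token exists

    def pending_recommendations(cluster_id):
        resp = requests.get(f"{BASE}/clusters/{cluster_id}/recommendations",
                            headers=HEADERS, timeout=10)
        resp.raise_for_status()
        return resp.json()  # e.g. [{"id": "r1", "risk": "low", ...}, ...]

    def apply(rec_id):
        resp = requests.post(f"{BASE}/recommendations/{rec_id}/apply",
                             headers=HEADERS, timeout=10)
        resp.raise_for_status()

    # "Move VM X to node Y" style suggestions, applied only when low-risk:
    for rec in pending_recommendations("cluster-42"):
        if rec.get("risk") == "low":
            apply(rec["id"])

The point of the pattern is that the automation is a thin loop over the API; whether it runs in a container under Kubernetes or from an Ansible module is a packaging choice.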
Making it real means thinking about where it can actually fit, and then how to implement an end-to-end solution and a technology stack so that I just have a pipeline available to me; that chasm is literally a matter of months. What we're able to deliver, for example with HPE BlueData, is a catalog, self-service experience where you can select and seamlessly build a pipeline literally in a matter of minutes, all completely and seamlessly hosted, making AI and machine learning essentially available for the mainstream. >> So the intelligent data platform makes it possible to see these new classes of applications become routine, without forcing the underlying storage administrators themselves to become data scientists. >> Absolutely. >> All right. The intelligent data platform is a great concept, but it's got to be made real, and it's being made real today by HPE. Calvin Zito is a thought leader at HPE, and he's done a series of chalk talks on improving storage and improving data management. One of the more interesting ones was specifically on the intelligent data platform. Let's watch Calvin Zito's chalk talk. >> Hey guys, it's time for another Around the Storage Block chalk talk. In this chalk talk we're going to look at the intelligent data platform. Let me set up the discussion. At HPE we see the dawn of the intelligence era. The flash era brought us speed, and flash is now table stakes. The cloud era brought new levels of agility, and everyone expects an as-a-service experience going forward. The intelligence era, with an AI-driven experience for infrastructure operations and AI-enabled unlocking of insights, is poised to catapult businesses forward. The intelligence era will see the rise of the intelligent enterprise. The enterprise will be always on, always fast, always agile to respond to different challenges; but most of all, the intelligent enterprise will be built for innovation: innovation that can unleash new services, revenue streams and business models. Every enterprise will need an intelligent data strategy, where your data is always on and always fast, automated and on-demand, hybrid by design, and applies global intelligence for visibility and lifecycle management. Our strategy is to deliver an intelligent data platform that turns your data challenges into business opportunities. It begins with workload-optimized composable systems for multiple workloads, and we deliver cloud services for a hybrid cloud environment so that you can seamlessly move data throughout its lifecycle; I'll have more on this in a moment. The global intelligence engine infuses the entire infrastructure with intelligence. It starts with predicting and proactively resolving issues before they occur. It creates a unique workload fingerprint, and these workload fingerprints, combined with global learning, enable us to drive recommendations to keep your app workloads and supporting infrastructure always optimized and delivering predictable speed. We have a REST-API-first strategy and offer pre-built automation connectors. We bring app-aware protection for both traditional and modern new-stack application workloads, and you can use the intelligent data platform to build and deliver flexible big data and AI pipelines for driving real-time analytics. Let's take a quick look at the portfolio of workload-optimized composable systems. These are systems across mission-critical and general-purpose workloads, as well as secondary data, and solutions for the emerging big data and AI applications.
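Zito's workload fingerprint can be pictured as a feature vector summarizing a workload's I/O behavior, matched against fleet-wide history to reuse tuning that worked elsewhere. The sketch below is a toy with invented features and an invented two-entry fleet; a real engine would learn from the billions of data points mentioned earlier.

    # Toy workload fingerprinting: summarize I/O samples as a vector, then
    # find the nearest known profile to borrow its recommendation.
    import math

    def fingerprint(samples):
        # samples: list of (read_fraction, io_size_kb, queue_depth) tuples
        n = len(samples)
        return tuple(sum(s[i] for s in samples) / n for i in range(3))

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    FLEET = {  # pretend global learning: fingerprint -> advice
        (0.9, 8.0, 4.0): "read-heavy OLTP: favour the low-latency tier",
        (0.2, 256.0, 32.0): "write-heavy streaming: favour the throughput tier",
    }

    def recommend(samples):
        fp = fingerprint(samples)
        nearest = min(FLEET, key=lambda known: distance(fp, known))
        return FLEET[nearest]

    print(recommend([(0.85, 8, 4), (0.95, 8, 5)]))  # -> the OLTP advice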
Because our portfolio is built for the cloud, we offer comprehensive cloud data services for both production workloads and backup and archive in the cloud. HPE InfoSight provides the global intelligence across the portfolio, and we give you the flexibility of consuming these solutions as a service with HPE GreenLake. I want to close with one more thing. The HPE intelligent data platform has three main attributes. First, it's AI-driven: it removes the burden of managing infrastructure so that IT can focus on innovating, not administrating. Second, it's built for cloud, and it enables easy data and workload mobility across hybrid cloud environments. Finally, the intelligent data platform delivers an as-a-service experience, so you can be your own cloud provider. To learn more, go to hpe.com. I always love to hear from you on Twitter, where you can find me as CalvinZito, and you can find my blog at hpe.com/blog. Until next time, thanks for joining me on this Around the Storage Block chalk talk. >> I think Calvin makes a compelling case that the opportunity to use these technologies is available today, not something we're just going to wait for in the future. And that's good, because one of the most important things business has to think about is how it is going to utilize some of these new AI and related technologies to alter the way it engages its customers, runs its business and handles its operations, and ultimately to improve its overall efficiency and effectiveness in the marketplace. It's very clear that this intelligent data platform is required to do many of the advanced AI things that business wants to do, but it also requires AI in the platform itself. So let's go back to Sandeep Singh and talk about how HPE foresees AI being embedded into the intelligent data platform, so that it can make possible greater utilization of AI in the rest of the application portfolio. So we've got this significant problem, and we now have to figure out how to architect for it, because we want predictability, certainty and cost clarity in how we're going to do this. Part of the push is new use cases for AI, so we're trying to push data up so that we can build these new use cases. But it seems we also have to take some of those very same technologies and drive them down into the infrastructure, so we get greater intelligence, greater self-monitoring, and greater self-management and self-administration within the infrastructure itself. Have I got that right? >> Yes, absolutely. What becomes important for customers, when you think about data and ultimately the storage that underlies the data, is that you can build and deploy fast and reliable storage, but that's only solving half the problem. Greater than 50 percent of the issues actually end up arising from the higher layers. For example, you could change the firmware on the host bus adapter inside a server, and that can trickle down and cause a data-unavailability or performance-slowdown issue. You need to be able to predict that all the way up at that higher level, and then prevent it from occurring. Or your virtual machines might be in a state of memory overcommitment at the server level, or CPU overcommitment; how do you discover those issues and prevent them from happening? The other area that's becoming important is this whole notion of cloud and hybrid cloud, where that complexity tends to multiply exponentially.
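Singh's overcommitment example is worth pinning down, because the check itself is trivial; what the platform adds is running it continuously, fleet-wide, and ahead of the symptom. A deliberately naive sketch with invented numbers and thresholds:

    # Naive memory-overcommit check: compare what the VMs on a host were
    # promised against what the host physically has. Data is illustrative.
    def check_host(name, vm_allocations_gb, host_capacity_gb, limit=1.5):
        ratio = sum(vm_allocations_gb) / host_capacity_gb
        if ratio > limit:
            # A real engine would predict this before it bites and suggest
            # which VM to move where, not just flag it after the fact.
            return f"{name}: memory overcommitted {ratio:.2f}x (limit {limit}x)"
        return f"{name}: ok ({ratio:.2f}x)"

    print(check_host("esx-07", [64, 64, 48, 32], host_capacity_gb=128))
    # -> esx-07: memory overcommitted 1.62x (limit 1.5x)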
So with the smarts we're going after in building out that hybrid cloud infrastructure, there are fundamental challenges. Even when I've got a new workload and I want to place it, even on premises, because you've had lots of silos, how do you even figure out where I should place workload A, and how it will react with workloads B and C on a given system? Now multiply that across hundreds of systems and multiple clouds, and you can see the challenge multiplying exponentially. >> Oh yeah. Well, I would say that with "where do I put workload A," the right answer today may be here, but the right answer tomorrow may be somewhere else, and you want to make sure that the services required to perform workload A are resident and available, without a lot of administrative work necessary to ensure that there's commonality. That's kind of what we mean by this hybrid multi-cloud world, isn't it? >> Absolutely. And when you start to think about it, you fundamentally end up needing the data mobility aspect of it, because without the data you can't really move your workloads. You need consistency of data services, so that if your app is architected for reliability around a set of data services, those just go along with the application. And then, building on top of that, you need portability for your actual application workload, consistently managed with a hybrid management interface. So we want an intelligent data platform that's capable of assuring performance, assuring availability and assuring security, and that goes beyond that to deliver a simplified, automated experience, so that everything is just available through a self-service interface. And then it brings along a level of intelligence that's just built into it globally, so that instead of trying to predict manually, and landing in a world of reacting after IT fires have occurred, there is a sea of sensors and the infrastructure is automatically predicting and preventing issues before they ever occur. And going beyond that, how can you actually fingerprint the individual application workloads to then deliver prescriptive insights, to keep the infrastructure always optimized? >> So: discerning the patterns of data utilization, so that, number one, the administrative cost of making sure the data is available where it needs to be goes down; number two, assuring that data as an asset is made available to developers as they create new applications, new things that create new work; but also working very closely with the administrators so that they are not bound up in an explosion in the number of tasks they have to perform to keep this all working across the board. >> Yes. >> I want to thank Sandeep Singh and Calvin Zito, both of HPE, as well as Wikibon's David Floyer, for sharing their ideas on this crucially important topic of how we're going to take more of a platform approach to do a better job of managing crucial data assets in today's and tomorrow's digital businesses. I'm Peter Burris, and this has been another Wikibon theCUBE digital community event, sponsored by HPE. Now stay tuned for our CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on important issues facing business today. Thank you very much for watching, and now let's CrowdChat.
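The placement question Burris and Singh circle above, where workload A should go given B and C already running and data that must follow it, is at heart a constrained scoring problem. Here is a toy version with invented constraints; real placement would also weigh SLAs, cost, interference between workloads, and egress.

    # Toy workload placement: score candidate sites by fit and data locality.
    # All sites, capacities, and weights are made up for illustration.
    def score(site, wl):
        if site["free_cpu"] < wl["cpu"] or site["free_mem"] < wl["mem"]:
            return None  # does not fit at all
        locality = 1.0 if wl["dataset"] in site["datasets"] else 0.3
        headroom = min(site["free_cpu"] - wl["cpu"],
                       site["free_mem"] - wl["mem"])
        return locality * headroom

    def place(sites, wl):
        scored = [(score(s, wl), s["name"]) for s in sites]
        viable = [(v, n) for v, n in scored if v is not None]
        return max(viable)[1] if viable else None

    sites = [
        {"name": "on-prem", "free_cpu": 16, "free_mem": 64,
         "datasets": {"ledger"}},
        {"name": "cloud-eu", "free_cpu": 24, "free_mem": 96,
         "datasets": set()},
    ]
    print(place(sites, {"cpu": 8, "mem": 32, "dataset": "ledger"}))
    # -> on-prem: data locality beats the cloud site's extra headroom

And as the exchange above notes, tomorrow the right answer may change, so the scoring has to be rerun as sites and datasets move; that is exactly why data mobility and consistent data services have to come first.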

Published Date : Jul 26 2019


Morgan McLean, Google Cloud Platform & Ben Sigelman, LightStep | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back. This is theCUBE's coverage of KubeCon, CloudNativeCon 2019. I'm Stu Miniman, my co-host for two days of wall-to-wall coverage is Corey Quinn. Happy to welcome back to the program first Ben Sigelman, who is the co-founder and CEO of LightStep. And welcome to the program for the first time Morgan McLean, who's a product manager at Google Cloud Platform. Gentlemen, thanks so much for joining us. >> Thanks for having us. >> Yeah. >> All right so, this was a last minute add for us because you guys had some interesting news in the keynote. I think the feedback everybody's heard is there are too many projects and everything's overlapping, and how do I make a decision? But the interesting piece is OpenCensus, which Morgan was doing, and OpenTracing, which Ben and LightStep were doing, are now moving together as OpenTelemetry, if I've got it right. >> Yup. >> So, is it just everybody's holding hands and singing Kumbaya around the Kubernetes campfire, or is there something more to this? >> Well I mean, it started when the CNCF locked us in a room and told us there were too many projects. (Stu and Ben laughing) Really wouldn't let us leave. No, to be fair they did actually take us to a room and really got the ball rolling, but conversations have picked up for the last few months and personally I'm just really excited that it's gone so well. Initially if you told me six or nine months ago that this would happen, I would've been, given just the way the projects were going, both were growing very quickly, I would've been a little skeptical. But seriously, this merger's gone beyond my wildest dreams. It's awesome to unite the communities, and it's awesome to unite the projects together. >> What has the response been from the communities on this merger? >> Very positive. >> Yeah. >> Very positive. I mean OpenTracing and OpenCensus are both projects with healthy user bases that are growing quickly and all that, but the reason people adopt them is to future-proof their own software. Because they want to adopt something that's going to be here to stay. And by having these two things out in the world that are both successful, and were overlapping in terms of their goals, I think the presence of two projects was actually really problematic for people. So, the fact that they're merging is net positive, absolutely for the end user community, and also for the vendor community; it's a similar, it's almost exactly the same parallel thought process. When we met, the CNCF did broker an in-person meeting where they gave us some space and we all got together and, I don't know how many people were there, like 20 or 30 people in that room. >> They did let us leave the room though, yesterday, yeah that was nice. >> They did let us leave the room, that's true. We were not locked in there, (Morgan laughing) but they asked us in the beginning, essentially they asked everyone to state what their goals were. And almost all of us really had the same goal, which is just to try and make it easy for end users to adopt a telemetry project that they can stick with for the long haul. And so when you think of it in that respect, the merger seems completely obvious. It is true that it doesn't happen very often, and we could speculate about why that is.
But I think in this case it was enabled by the fact that we had pretty good social relationships with OpenCensus people. I think Twitter tends to amplify negativity in the world in general, as I'm sure people know; not a controversial statement. >> News alert, wait, absolutely, the negatives get amplified; it's something in the algorithm I think. >> Yeah, yeah. >> Maybe they should fix that. >> Yeah, yeah (laughs) exactly. And it was funny, there was a lot of perceived animosity between OpenTracing and OpenCensus a year ago, nine months ago, but when you actually talked to the principals in the projects, and even just the general purpose developers who were doing a huge amount of work for both projects, that wasn't a sentiment that was widely held or widely felt I think. So, it has been a very kind of happy, it's a huge relief frankly, this whole thing has been a huge relief for all of us I think. >> Yeah, it feels like the general ask has always been for tracing that doesn't suck. And that tends to be a bit of a tall order. The way that they seem to have responded to it is a credit to the maturity of the community. And I think it also speaks to a growing realization that no one wants to have a monoculture of just one option, any color you want so long as it's black. (Ben laughing) Versus there's 500 different things you can pick that all stand in that same spot, and at that point analysis paralysis kicks in. So this feels like it's a net positive for absolutely everyone involved. >> Definitely. Yeah, one of the anecdotes that Ben and I have shared throughout a lot of these interviews is there were a lot of projects that wanted to include distributed tracing in them. So various web frameworks, I think, was it Hadoop or HBase was-- >> HBase and HDFS were jointly deciding what to do about instrumentation. >> Yeah, and so they would post an issue on GitHub and someone from OpenTracing would respond saying hey, OpenTracing does this. And they'd be like oh, that's interesting, we can go build an implementation, file an issue; and someone from OpenCensus would respond and say, no wait, you should use OpenCensus. And with these being very similar yet incompatible APIs, these groups like HBase would sit there and be like, this isn't mature enough, I don't want to deal with this, I've got more important things to focus on right now. And rather than even picking one and ignoring the other, they just ignored tracing, right? With things moving to microservices, with Kubernetes being so popular (I mean, just look at this conference), distributed tracing is no longer this kind of nice-to-have for when you're a big company; you need it to understand how your app works and understand the cause of an outage, the cause of a problem. And when you had organizations like this that were looking at tracing instrumentation saying this is a bit of a joke with two competing projects, no one was being served well. >> All right, so you talked about there being incompatible APIs, so how do we get from where we were to where we're going? >> So I can talk about that a little bit. The APIs are conceptually incredibly similar. And part of the criteria for any new language for OpenTelemetry is that we are able to build a software bridge to both OpenTracing and OpenCensus that will translate existing instrumentation alongside OpenTelemetry instrumentation, and emit the correct data at the end. And we've built that out in Java already and have started working on a few other languages.
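The bridge Sigelman describes is conceptually a shim: an object that satisfies the old OpenTracing-style interface while forwarding every call to an OpenTelemetry tracer underneath. A stripped-down sketch of the shape follows; the real bridges also translate span contexts, references, and baggage, and the interfaces here are simplified.

    # Sketch of an OpenTracing-to-OpenTelemetry bridge: old instrumentation
    # keeps calling the old API; new telemetry comes out the other side.
    class BridgeSpan:
        def __init__(self, otel_span):
            self._otel = otel_span

        def set_tag(self, key, value):            # OpenTracing vocabulary
            self._otel.set_attribute(key, value)  # OpenTelemetry vocabulary

        def finish(self):
            self._otel.end()

    class BridgeTracer:
        """Looks like an OpenTracing tracer, emits OpenTelemetry spans."""
        def __init__(self, otel_tracer):
            self._otel = otel_tracer

        def start_span(self, operation_name):
            return BridgeSpan(self._otel.start_span(operation_name))

    # Existing OpenTracing-era call sites work unchanged:
    #   span = tracer.start_span("fetch-user")
    #   span.set_tag("user.id", 42)
    #   span.finish()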
It's not a tremendously difficult thing to do if that's your goal. I've worked on this stuff, I started working on Dapper in 2004, so it's been 15 years that I've been working in this space, and I have a lot of regrets about what we did to OpenTracing. And I had this unbelievably tempting thing to start greenfield, like, let's do it right this time, and I'm suppressing every last impulse to do that. And the only goal for this project technically is backwards compatibility. >> Yeah. >> 100% backwards compatibility. There's the famous XKCD comic where you have 14 standards and someone says, we need to create a new standard that will unify across all 14 standards, and now you have 15 standards. So, we don't want to follow that pattern. And by having the leadership from OpenTracing and OpenCensus involved wholesale in this new effort, as well as having these compatibility bridges, we can avoid the fate of IPv6, of Python 3 and things like that, where the new thing is very appealing but it's so far from the old thing that you literally can't get there incrementally. So our entire design constraint is: make sure that backwards compatibility works, get to one project, and then we can think about the grand unifying theory of observability-- >> Ben, you are ruining the best thing about standards, which is that there are so many of them to choose from. (everyone laughing) >> There's still plenty more growing in other areas, (laughs) it's just in this particular space it's smaller. >> One could argue that your approach is nonstandard in its own right. (Ben laughing) And in my own experiments with distributed tracing it seems like step one is, first you have to go back and instrument everything you've built. And step two, hey come back here, because that's a lot of work. The idea of an organization going back and reinstrumenting everything they've already instrumented the first time-- >> It's unlikely. >> Unless they build things very modularly and very portably to do exactly that, it's a bit of a heavy lift. >> I agree, yeah, yeah. >> So going forward, are people who have deployed one or the other of your projects going to have to go back and do a reinstrumentation, or will they unify and continue to work as they are? >> So, I would posit (I don't know, I would be making up the statistic, so I shouldn't) that a vast majority, I'm thinking like 95, 98% of instrumentation, is actually embedded in frameworks and libraries that people depend on. So you need to get Dropwizard, and Spring, and Django, and Flask, and Kafka; things like that need to be instrumented. For the application code, the instrumentation, that burden is a bit lower. We announced something called SpecialAgent at LightStep last week, separate from all of this. It's kind of a funny combination; a typical APM agent will interpose on individual function calls, which is a very complicated and heavyweight thing. This doesn't do any of that, but it basically surveys what you have in your process, it looks for OpenTracing, and in the future OpenTelemetry, instrumentation that matches that, and then installs it for you. So you don't have to do any manual work, just basically gluing tab A into slot B or whatever; you don't have to do any of that stuff, which is what most OpenTracing instrumentation actually looks like these days. And you can get off the ground without doing any code modifications. So, I think that direction, which is totally portable and vendor neutral as well, as a layer on top of telemetry, makes a ton of sense.
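The SpecialAgent idea Sigelman outlines, survey what the process has loaded and then install whatever instrumentation matches, reduces to a registry plus a scan of sys.modules. The module names and patch functions below are invented stand-ins; a real agent ships dozens of real ones.

    # Sketch of survey-and-instrument: for each library the app has already
    # imported that we have instrumentation for, glue the instrumentation in.
    import sys

    def instrument_requests():
        print("patch requests: wrap Session.send in a client span")

    def instrument_flask():
        print("patch flask: wrap Flask.wsgi_app in a server span")

    REGISTRY = {  # library name -> installer (hypothetical entries)
        "requests": instrument_requests,
        "flask": instrument_flask,
    }

    def auto_instrument():
        for module_name, install in REGISTRY.items():
            if module_name in sys.modules:  # the app already uses it
                install()

    # Called once at startup, after the application's own imports:
    # auto_instrument()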
There are also data translation efforts that are part of OpenCensus that are being ported in to OpenTelemetry that also serve to repurpose existing sources of correlated data. So, all these things are ways to take existing software and get it into the new world without requiring any code changes or redeploys. >> The long-term goal of this has always been that because web framework and client library providers will go and build the instrumentation into those, that when you're writing your own service that you're deploying in Kubernetes or somewhere else, that by linking one of the OpenTelemetry implementations that you get all of that tracing and context propagation, everything out of the box. You as a sort of individual developer are only using the APIs to define custom metrics, custom spans, things that are specific to your business. >> So Ben, you didn't name LightStep the same as your project. But that being said, a major piece of your business is going through a change here, what does this mean for LightStep? >> That's actually not the way I see it for what it's worth. LightStep as a product, since you're giving me an opportunity to talk about it, (laughs) foolish move on your part. No, I'm just kidding. But LightStep as a product is totally omnivorous, we don't really care where the data comes from. And translating any source of data that has a correlation ID and a timestamp is a pretty trivial exercise for us. So we do support OpenTracing, we also support OpenCensus for what it's worth. We'll support OpenTelemetry, we support a bunch of weird in-house things people have already built. We don't care about that at all. The reason that we're pursuing OpenTelemetry is two-fold, one is that we do want to see high quality data coming out of projects. We said at the keynote this morning, but observability literally cannot be better than your telemetry. If your telemetry sucks, your observability will also suck. It's just definitionally true, if you go back to the definition of observability from the '60s. And so we want high quality telemetry so our product can be awesome. Also, just as an individual, I'm a nerd about this stuff and I just like it. I mean a lot of my motivation for working on this is that I personally find it gratifying. It's not really a commercial thing, I just like it. >> Do you find that, as you start talking about this more and more with companies that are becoming cloud-native rapidly, either through digital transformation or from springing fully formed from the forehead of some God, however these born in the cloud companies tend to be, that they intuitively are starting to grasp the value of tracing? Or does this wind up being a much heavier lift as you start, showing them the golden path as it were? >> It's definitely grown like I-- >> Well I think the value of tracing, you see that after you see the negative value of a really catastrophic outage. >> Yes. >> I mean I was just talking to a bank, I won't name the bank but a bank at this conference, and they were talking about their own adoption of tracing, which was pretty slow, until they had a really bad outage where they couldn't transact for an hour and they didn't know which of the 200 services was responsible for the issue. And that really put some muscle behind their tracing initiative. So, typically it's inspired by an incident like that, and then, it's a bit reactive. Sometimes it's not but either way you end up in that place eventually. 
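The bank's hour-long mystery, which of 200 services was actually burning the time, is the question a trace answers mechanically: given spans with parent links, the usual first suspect is the service with the most self-time, meaning its own duration minus its children's. A toy computation over invented spans:

    # Toy trace analysis: find the service with the most *self* time.
    spans = [
        {"id": 1, "parent": None, "svc": "gateway",  "start": 0,  "end": 900},
        {"id": 2, "parent": 1,    "svc": "accounts", "start": 10, "end": 880},
        {"id": 3, "parent": 2,    "svc": "ledger",   "start": 20, "end": 860},
    ]

    def self_times(spans):
        children = {}
        for s in spans:
            children.setdefault(s["parent"], []).append(s)
        out = {}
        for s in spans:
            child = sum(c["end"] - c["start"] for c in children.get(s["id"], []))
            out[s["svc"]] = (s["end"] - s["start"]) - child
        return out

    print(max(self_times(spans).items(), key=lambda kv: kv[1]))
    # -> ('ledger', 840): the ledger call owns nearly all of the latency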
>> I'm a strong proponent of distributed tracing and I feel very seen by your last answer. (Ben laughing) >> But it's definitely made a big impact. If you came to conferences like this two years ago you'd have Adrian, or Yuri, or someone doing a talk on distributed tracing. And they would always start by asking the 100 to 200 person audience, who here knows what distributed tracing is? And like five people would raise their hand and everyone else would be like no, that's why I'm here at the talk, I want to find out about it. And you go to ones now, or even last year, and now they have 400 people at the talk and you ask, who knows what distributed tracing is? And last year over half the people would raise their hand; now it's going to be even higher. And I think just beyond anecdotes, clearly businesses are finding the value because they're implementing it. And you can see that through the number of companies that have an interest in OpenTracing, OpenTelemetry, OpenCensus. You can see that in the growth of startups in this space, LightStep and others. >> The other thing I like about OpenTelemetry as a name, it's a bit of a mouthful, but it's important for people to understand the distinction between telemetry and tracing data and actual solutions. I mean OpenTelemetry stops when the correct data is being emitted. And then what you do with that data is your own business. And I also think that people are realizing that tracing is more than just visualizing a single distributed trace. >> Yeah. >> The traces have an enormous amount of information in there about resource usage, security patterns, access patterns, large-scale performance patterns that are embedded in thousands of traces; that sort of data is making its way into products as well. And I really like that OpenTelemetry has clearly delineated that it stops with the telemetry. OpenTracing was confusing for people, where they'd want tracing and they'd adopt OpenTracing, and then be like, where's my UI? And it's like well no, it's not that kind of project. With OpenTelemetry I think we've been very clear, this is about getting
I meet with them, Ben meets with 'em, we all meet with 'em all the time, we work with them. And the biggest challenge we have is just the data we get is bad, right? Either we don't support certain platforms, we'll get traces that dead end at certain places, we don't get metrics with the same name for certain types of telemetry. And so this project is going to fix that and it's going to solve this problem for a lot of vendors who have this, frankly, a really strong economic incentive to play ball, and to contribute to it. >> Do you see that this, I guess merging of the two projects, is offering an opportunity to either of you to fix some, or revisit if not fix, some of the mistakes, as they were, of the past? I know every time I build something I look back and it was frankly terrible because that's the kind of developer I am. But are you seeing this, as someone who's probably, presumably much better at developing than I've ever been, as the opportunity to unwind some of the decisions you made earlier on, out of either ignorance or it didn't work out as well as you hoped? >> There are a couple of things about each project that we see an opportunity to correct here without doing any damage to the compatibility story. For OpenTracing it was just a bit too narrow. I mean I would talk a lot about how we want to describe the software, not the tracing system. But we kind of made a mistake in that we called it OpenTracing. Really people want, if a request comes in, they want to describe that request and then have it go to their tracing system, but also to their metric system, and to their logging stack, and to anywhere else, their security system. You should only have to instrument that once. So, OpenTracing was a bit too narrow. OpenCensus, we've talked about this a lot, built a really high quality reference implementation into the product, if OpenCensus, the product I mean. And that coupling created problems for vendors to adopt and it was a bit thick for some end users as well. So we are still keeping the reference implementation, but it's now cleanly decoupled. >> Yeah. >> So we have loose coupling, a la OpenTracing, but wider scope a la OpenCensus. And in that aspect, I think philosophically, this OpenTelemetry effort has taken the best of both worlds from these two projects that it started with. >> All right well, Ben and Morgan thank you so much for sharing. Best of luck and let us know if CNCF needs to pull you guys in a room a little bit more to help work through any of the issues. (Ben laughing) But thanks again for joining us. >> Thank you so much. >> Thanks for having us, it's been a pleasure. >> Yeah. >> All right for Corey Quinn, I'm Stu Miniman we'll be back to wrap up our day one of two days live coverage here from KubeCon, CloudNativeCon 2019, Barcelona, Spain. Thanks for watching theCUBE. (soft instrumental music)

Published Date : May 21 2019


Intelligent Data Platform


 

>> Hi, this is Dave Vellante with theCUBE, and we're running a series of events with various episodes. The first one is on the intelligent data platform. I'm here with Terry Richardson of HPE. Terry, what's that all about? >> So the intelligent data platform is really the rebranding of our complete storage offering, but it transcends storage into our compute infrastructure. What you'll learn in this particular session is what makes HPE absolutely unique in this marketplace, leveraging our AI-for-the-data-center technology. >> So watch this CrowdChat. We'll be holding events, we have episodes that will be flowing in, along with how-tos, white papers and other great content. We'll see you in the CrowdChat.

Published Date : Apr 15 2019
