
Search Results for IOX:

Evan Kaplan, InfluxData | AWS re:Invent 2022


 

>>Hey everyone, welcome to Las Vegas. theCUBE is here, live at the Venetian Expo Center for AWS re:Invent 2022. Amazing attendance. This is day one of our coverage. Lisa Martin here with Dave Vellante. Dave, it's great to see so many people back. We've been having great conversations already, and we have wall-to-wall coverage for the next three and a half days. When we talk to companies and customers, every company has to be a data company, and one of the things I think we learned in the pandemic is that access to real-time data and real-time analytics is no longer a nice-to-have; it's a differentiator and a competitive advantage. >>It's all about data. I love the topic; it has so many dimensions and so much texture. I can't get enough of data. >>I know. We have a great guest joining us. One of our alumni is back, Evan Kaplan, the CEO of InfluxData. Evan, thank you so much for joining us. Welcome back to theCUBE. >>Thanks for having me. It's great to be here. >>So here we are, day one. I was telling you before we went live, we're nice and fresh hosts. Talk to us about what's new at InfluxData since the last time we saw you at re:Invent. >>That's great. First of all, we should acknowledge what's going on here. This is pretty exciting. I know there was a show last year, but this feels like the first post-COVID show; there's a lot of energy and a lot of attention despite a difficult economy. In terms of your lead-in about big data: if we were talking about big data five or six years ago, what would we be talking about? We'd be talking about Hadoop, Cloudera, Hortonworks, big data lakes, data stores. What's happened since is this interesting dynamic of, let's call it, the specialization of data, in which it breaks into different fields, almost a taxonomy. You've got search data, observability data, graph data, document data, and now you have time series data. >>And what you're seeing in the market, with a mostly open source dynamic driving it, is this incredible capability of developers to assemble data platforms that aren't unicellular, that aren't just built on Hadoop or Oracle or Postgres or MySQL, but in fact represent different data types. What we care about is time series: anything that happens in time, where time can be the primary measurement, which if you think about it is a huge proportion of real data. When you think about what drives AI, you think about what happened, what happened, what happened, and what's going to happen. And what happened is always defined by a period, a measurement, a time. So what's new for us is that we've developed this new open source engine called IOx. It's basically a refresh of the whole database, a columnar database that uses Apache Arrow, Parquet, and DataFusion and turns it into a super powerful real-time analytics platform. It was already pretty real time before, but it's even more so now, and it adds SQL capability and infinite cardinality. So it handles bigger data sets, but importantly, not just bigger but faster data. That's primarily what we're talking about at the show. >>So how does that affect where you can play in the marketplace?
How does it affect your total available market, your customer opportunities? >>Great question. It's a really interesting market in that you've got all of these different approaches to databases, whether you take data warehouses from Snowflake, or arguably Databricks also, or the individual database companies like Mongo, Influx, Neo4j, Elastic, and people like that. The commonality you see across all of them is that many, if not all, are based on some sort of open source dynamic. I think that's an unstoppable trend that will continue. But in terms of the broader database market and our total available TAM, lots of these things are coming together in interesting ways. The wave we want to ride, because it's all big data and it's all increasingly fast data and it's all machine learning and AI, is really around that measurement issue, that instrumentation: the idea that if you're going to build any sophisticated system, it starts with instrumentation, and the journey is defined by instrumentation. So we view ourselves as the instrumentation tooling for understanding complex systems. >>A quick follow-up: why did you say "arguably Databricks"? I mean, open source ethos? >>I said arguably Databricks because of Spark. It's a great company and it's based on Spark, but there's quite a gap between Spark and what Databricks is today. In some ways, Databricks from the outside looking in looks a lot like Snowflake to me: a really sophisticated data warehouse with a lot of post-processing capabilities >>and with open source less central to the core database. >>Yeah, right. I totally agree. Okay, thank you for that. >>That wasn't to argue they're not a good company. >>No, no, they've got great momentum. I was just curious. >>So talk a little bit about IOx and what it's enabling you to achieve from a competitive advantage perspective. The key differentiators; give us the scoop. >>Our old storage engine was called TSM, also open source, and IOx is open source too. The old engine was really built around time series measurements, particularly metrics, handling lots of metrics at scale and making it super easy for developers to use. But it only supported either a custom graphical UI that you'd build yourself on top of it, or a dashboarding tool like Grafana or Chronograf. With IOx, two or three improvements were important. One is that we now support, or will support, things like Tableau and Microsoft Power BI, so you're taking the same data that was available for instrumentation and using it for business intelligence as well. That became super important, and it partly answers your question about the expanded market. The second thing is that when you're dealing with time series data, you're dealing with this concept of cardinality, the idea that it's a multiplication of measurements in a table. The more measurements you want across the more series you have, the more you get this exponentially expanding set that can choke a database.
We've designed IOx to handle what we call infinite cardinality, so you don't even have to think about it from a design point of view. And then lastly, query performance is dramatically better. So it's pretty exciting. >>So with unlimited cardinality, basically you can identify relationships between data in different databases, is that right? >>In the same database, but across different measurements and different tables. So you could say, I want to look at how the noise levels behaved in this room across 400 different locations, on 25 different days, over seven months of the year. Each one of those is a measurement, and each one adds to cardinality. And you can say, I want to search on Tuesdays in December, what the noise level was at 2:21 PM, and you get a very quick response. That kind of instrumentation is critical to smarter systems.
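As a rough sketch of the query shape Kaplan describes, the snippet below is purely illustrative: the measurement name, tags, and readings are invented, and the SQL in the comment is just one way such a time-scoped predicate could be written against a SQL-capable engine like IOx. The pandas code applies the same filter to a toy in-memory frame so the example runs on its own.

```python
import pandas as pd

# Hypothetical noise-level readings tagged by location. In SQL, the question
# "what was the noise level at 2:21 PM on a given Tuesday in December?" might
# look something like:
#
#   SELECT location, time, decibels
#   FROM noise_levels
#   WHERE time >= '2022-12-06T14:21:00Z'
#     AND time <  '2022-12-06T14:22:00Z';
#
# Below, the same idea applied to a small in-memory frame.
readings = pd.DataFrame({
    "time": pd.to_datetime([
        "2022-12-06 14:21:00",   # a Tuesday in December
        "2022-12-07 14:21:00",   # a Wednesday
        "2022-11-29 14:21:00",   # a Tuesday, but in November
    ]),
    "location": ["room-101", "room-101", "room-204"],
    "decibels": [61.2, 58.9, 63.5],
})

mask = (
    (readings["time"].dt.dayofweek == 1)               # Tuesday (Monday == 0)
    & (readings["time"].dt.month == 12)                # December
    & (readings["time"].dt.strftime("%H:%M") == "14:21")
)
print(readings[mask])
```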
>>How are you able to process that data at a performance level that doesn't bring the database to its knees? What's the secret sauce behind that? >>It's a columnar database, built on Parquet and Apache Arrow. It's hard to do it justice without a much longer conversation, but it's an architecture that's really built for pulling that kind of data. If you know the data is time series and you're looking for a time measurement, you already have the ability to optimize pretty dramatically. >>So it's that purpose-built aspect of it. >>The purpose-built aspect. You couldn't take Postgres and do the same thing. >>Right, because a lot of vendors say, oh yeah, we have time series now. >>And they do. But the founding of the company came about because Paul Dix was working on Wall Street building time series databases on HBase, on MySQL, and on other platforms, and realized that every time we do it, we have to rewrite the code and build a bunch of application logic to handle it. We have customers adding hundreds of millions to billions of points a second. When you think about that ingest level, databases just aren't designed for it. And it's not just us; our competitors also build good time series databases. So the category is really emergent. >>Sure. Talk about a favorite customer story that you think really articulates the value of what Influx is doing, especially with IOx. >>Yeah, sure. I love this story because Tesla may not be in favor given the latest Elon Musk news, but we've had about a four-year relationship with Tesla, where they built their Powerwall technology around it: recording and seeing your device, seeing the charging on your car. It's all captured in Influx databases reporting from Powerwalls and Megapacks all over the world. They report to a central place at Tesla's headquarters, and it reports out to your phone so you can see it. What's really cool about this to me is that I've got two Tesla cars and Tesla solar roof tiles, so I watch this data all the time. It's a great customer story, and if you go on our website, you can see I did an hour-long interview with the engineer who designed the system, because the system is super impressive and I just think it's really cool. Plus it's all the good green stuff that we really appreciate, supporting sustainability. >>Right. Talk about it from a what's-in-it-for-me perspective as a customer: the change to IOx, what are some of its key features and the key value in it for customers like Tesla and other industry customers? >>Well, it's relatively new; it just arrived in our cloud product, so Tesla's not using it today. We have a first set of customers starting to use it, and it's in open source, where it's a very popular project. But the key points are really the things we've covered here. It's a broad SQL environment, so it reaches all those SQL developers, the same people who code against Snowflake's data warehouse or Databricks or Postgres, who can now write that code against Influx, which opens up the BI market. It's the cardinality, it's the performance. It's really an architecture; we've been doing this for six years, and it's the next generation of everything we've learned about making time series super performant. And that's only relevant because more and more things are becoming real time as we develop smarter and smarter systems. The journey is pretty clear: you instrument the system, you let it run, you watch for anomalies, you correct those anomalies, you re-instrument the system. Do that 4 billion times and you have a self-driving car; do that 55 times and you have a podcast that handles its audio better. Everything is on that journey of getting smarter and smarter. >>You're the big committers to IOx, right? Talk about how you support and develop the surrounding developer community, how you get that flywheel effect going. >>It's actually more art than science. First of all, you come up with an architecture that really resonates with developers, and Paul Dix, our founder, really is a developer's developer. He started talking in the community about an architecture that uses Apache Parquet, which is now becoming the standard for file formats, that uses Apache Arrow for directing queries, and that uses DataFusion, and he said what this thing needs is a columnar database that sits behind all of this and integrates it. He started talking about it two years ago, then started publishing the IOx commits on GitHub, and slowly, over time, on Hacker News and elsewhere, people went: yeah, this is fundamentally right. It addresses the problems people have with things like ClickHouse or plain relational databases, and they go: okay, this is the right architecture at the right time. It's not different from the original Influx, not different from what Elastic hit on, not different from what Confluent hit on with Kafka. Over time you build an audience of people who are committed to understanding this kind of stuff; they become committers, they become the core, and you build out from it. And we chose an MIT open source license, not some secondary license; competitors can use it, and they can use it against us. >>One of the things I know InfluxData talks about is the time to awesome, which I love. But what does that mean? What is the time to awesome for a developer?
>>It comes from that original story where Paul would have to write six months of application logic just to build time series based applications. Paul's notion, and this was partly based on the original Mongo, which was very successful because it was so easy to use relative to most databases, was a commitment I quickly joined onto: it should be quick for a developer to build something of import that solves a problem. So it's got a schemaless design, so you don't have to know the schema beforehand, and it does things that make it easy to feel powerful as a developer quickly. If you feel powerful with a tool quickly, you'll go deeper and deeper, and pretty soon you're taking that tool with you wherever you go; it becomes the tool of choice as you go to the next job or the next application. That's a fundamental way we think about it. To be honest, we haven't always delivered perfectly on that, but it's in our DNA, so we do pretty well, and I always feel like we can do better. >>If you were to put a bumper sticker about InfluxData on one of your Teslas, what would it say? >>By the way, I'm not rich; it just happens that we've had two Teslas for a while. We just committed to that. So ask the question again, sorry. >>A bumper sticker on InfluxData. What would it say? >>It would be that phrase: time to awesome. >>Love that. >>Yeah, I love it. >>Excellent. Time to awesome. Evan, thank you so much for joining Dave and the program. >>It's been really fun. Great being on. >>Great to have you back talking about what you're doing, helping organizations like Tesla and others really transform their businesses, which is all about business transformation these days. We appreciate your insights. >>That's great. Thank you. >>For our guest and Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in emerging and enterprise tech coverage. We'll be right back with our next guest.

Published Date : Nov 29 2022



Anais Dotis Georgiou, InfluxData | Evolving InfluxDB into the Smart Data Platform


 

>>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis Georgiou is here. She's a developer advocate for InfluxData, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>You're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. My understanding is that it leverages in-memory processing, of course, for speed; it's a columnar store, so it gives you compression efficiency; it's going to give you faster query speeds; and it stores files in object storage, so you get a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. Some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first is that it aims to have no limits on cardinality and to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also want operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Other really important parts are the ability to do bulk data export and import, which is super useful, and broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language. Of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns, and you've got big guns like Amazon, Google, and Microsoft throwing their collective weight behind it. Adoption is really starting to hit the steep part of the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++? >>Sure, that's a great question. Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++, has similar performance, and also compiles to native code like C++, unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are one of the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and we get that control over memory. Also, Rust's packaging system, crates.io, offers everything you need out of the box to have features like async and await to fix race conditions, to protect against buffer overflows, and to ensure thread-safe async caching structures as well. Essentially, it gives you all the fine-grained control you need to take advantage of memory and all your resources as well as possible so that you can handle those really high-cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx et cetera, the more you see how, even today, these systems do a lot of garbage collection, with an inverse impact on performance. So it looks like the community is really modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints associated with analyzing large data sets. We know that, but please explain: what is Arrow, and what does it bring to InfluxDB? >>Sure. Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend we're gathering field data about the temperature in our room and maybe also the temperature of our stove. In our table we have those two temperature values as well as a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house we're getting this data from. So you can picture this table where we have rows with the two temperature values for both our room and the stove. Usually our room temperature is regulated, so those values don't change very often. >>With column-oriented storage, you essentially take each column and group its values together. If you're just taking temperature values from the room and a lot of those values are the same, you can imagine how equal values end up neighboring each other, and when they neighbor each other in the storage format, that provides a perfect opportunity for cheap compression. That cheap compression enables high-cardinality use cases. It also enables faster scan rates: if you want the min and max temperature in the room across a thousand different points, you only have to read those thousand points to answer that question, and they're immediately available to you. But let's contrast this with a row-oriented storage solution so we can better understand the benefits of column-oriented storage. >>With row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove; you'd have to go across every tag value that describes where the room is located or what model the stove is, and every timestamp, and then pluck out the one temperature value you want at that one timestamp, and do that for every single row. So you're scanning across far more data, and that's why row-oriented storage doesn't provide the same efficiency as columnar. Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >>Okay, so you've basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, we can now handle columnar format. Versus what you're talking about, which is really native, is that form not as effective because it's largely a bolt-on? Can you elucidate on that? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. Those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage.
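To make the columnar intuition concrete, here is a small, hypothetical sketch using `pyarrow`, the Python implementation of Apache Arrow. The table layout mirrors the room-and-stove example above; the column names and values are invented for illustration.

```python
import pyarrow as pa
import pyarrow.compute as pc

# A tiny Arrow table shaped like the example: a timestamp column plus
# temperature readings for the room and the stove.
table = pa.table({
    "time": pa.array([1, 2, 3, 4, 5], type=pa.int64()),
    "room_temp_c": pa.array([21.0, 21.0, 21.0, 21.0, 21.5]),    # rarely changes
    "stove_temp_c": pa.array([20.0, 150.0, 180.0, 175.0, 90.0]),
})

# Each column is stored contiguously, so a query that only needs the room
# temperature never touches the stove or timestamp columns.
room = table.column("room_temp_c")
print(pc.min_max(room))   # scans a single column to answer min/max

# Runs of equal values within a column (21.0, 21.0, 21.0, ...) are exactly
# what makes columnar encodings like dictionary or run-length compression cheap.
```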
>>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. It's an extensible query execution framework, and it uses Arrow as its in-memory format. The way it helps InfluxDB IOx is that it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the querying, processing, and transformation of that data. It also has a pandas API, so you can take advantage of pandas DataFrames and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. Parquet is the column-oriented durable file format. It's important because it enables bulk import and export, and it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and are faster to scan because, again, they're column oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. That's essentially a lot of the benefit of Parquet.
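The DataFusion and Parquet pieces Georgiou describes fit together in a few lines. The sketch below assumes the `datafusion` Python bindings for Apache Arrow DataFusion (exact method names vary a little between versions) plus `pyarrow`; the file name, table, and data are invented.

```python
import pyarrow as pa
import pyarrow.parquet as pq
from datafusion import SessionContext  # Apache Arrow DataFusion Python bindings

# Write a tiny Parquet file so there is something to query.
pq.write_table(
    pa.table({
        "time": [1, 2, 3, 4],
        "room_temp_c": [21.0, 21.0, 21.5, 21.5],
        "stove_temp_c": [20.0, 150.0, 180.0, 90.0],
    }),
    "temps.parquet",
)

# DataFusion plans and executes SQL over Arrow data.
ctx = SessionContext()
ctx.register_parquet("temps", "temps.parquet")
result = ctx.sql("SELECT min(room_temp_c) AS lo, max(stove_temp_c) AS hi FROM temps")

# Results come back as Arrow record batches; to_pandas() hands them to the
# pandas and machine learning ecosystem mentioned in the conversation.
print(result.to_pandas())
```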
>>Got it. Very popular. So what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. Influx has contributed a lot of different things to the Apache ecosystem. For example, we contributed an implementation of Apache Arrow in Go, which will support querying with Flux. There have also been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like timestamp arithmetic, EXISTS clauses, and memory control. So Influx has contributed a lot to the Apache ecosystem and continues to do so. The idea, and the long-term strategy, is that the more you contribute to and build up these upstream projects, the more you perpetuate that cycle of improvement, and the more we invest in our own project as well. It's that kind of symbiotic relationship with, and appreciation of, the open source community. >>Yeah, got it. You've got that virtuous cycle going; people call it the flywheel. Give us your last thoughts and summarize what the big takeaways are from your perspective. >>I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and if you're interested in learning more about the technologies Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard work, and you just want to learn more, then I would encourage you to go to the monthly tech talks and community office hours; they're on the second Wednesday of every month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the influxdb_iox channel specifically to learn how to join those office hours and monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions, so if there's a particular technology or stack you want to dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You have a really rich community, people collaborate with their peers and solve problems, and you're super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source technology to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for InfluxData, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this.

Published Date : Nov 8 2022



Evolving InfluxDB into the Smart Data Platform


 

>>This past May, theCUBE, in collaboration with InfluxData, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database is, for many use cases, a superior alternative to general-purpose databases trying to do the same thing. You may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. When we introduced the concept to the community, we talked about how those time slices could in theory be taken every hour, every minute, every second, down to the millisecond, and how the world was moving toward real-time or near real-time data analysis to support physical infrastructure like sensors, other devices, and IoT equipment. Time series databases have had to evolve to efficiently support real-time data in emerging use cases in IoT and beyond. >>To do that, new architectural innovations have to be brought to bear, and as is often the case, open source software is the linchpin of those innovations. Hello, and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and produced by theCUBE. My name is Dave Vellante and I'll be your host today. In this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads, demands, and data, particularly around real-time data analytics use cases. First we're going to hear from Brian Gilmore, the director of IoT and emerging technologies at InfluxData, and we're going to talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and by specific tools. In this program you're going to hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which are powering a new engine for InfluxDB. >>These innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, while at the same time enabling real-time analytics with an architecture that can process data much faster and more efficiently. After Brian, we're going to hear from Anais Dotis Georgiou, a developer advocate at InfluxData, and we're going to get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. Then we're going to close the program with Tim Yokum, the director of engineering at InfluxData, who is going to explain how the InfluxDB team actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at InfluxData. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave, great to be here. I appreciate the time. >>Hey, explain why InfluxDB needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, not at all. I mean, for us it's been about staying ahead of the market.
If we think about what our customers are coming to us with now, related to requests like SQL query support and things like that, we have to figure out a way to execute those for them in a way that will scale long term. And we also want to make sure we're innovating, staying ahead of the market, and anticipating future needs. So this is really a transparent change for our customers. We'll be adding new capabilities over time that leverage this new engine, but initially the customers who are using us are going to see great improvements in performance, especially those working at the top end of the workload scale, with massive data volumes and the like. >>Yeah, and we're going to get into that today, the architecture and the like, but what was the catalyst for the enhancements? When and how did this all come about? >>Well, about three years ago we were primarily on premises. We had our open source and an enterprise product, and shifting that technology, especially the open source code base, to a service basis, hosted across multiple cloud providers, was a long journey. Phase one was that we wanted to host enterprise for our customers, so we created a service where we just managed and ran our enterprise product for them. Phase two of this cloud effort was to optimize for multi-tenant, multi-cloud, to host it in a truly SaaS manner where we could use some type of customer activity or consumption as the pricing vector. That was the birth of the first real InfluxDB Cloud, which has been really successful. >>We've seen something like 60,000 people sign up, and we've got tons of both enterprises and new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis. Having that big pool of very diverse customers to chat with as they're using the product and giving us feedback has pointed us in a really good direction in terms of continuously improving it and also making big leaps like we're doing with this new engine. >>Right. You've called it a transparent change for customers, so I'm presuming it's non-disruptive. But I really want to understand how much of a pivot this is: what does it take to make that shift from time series specialist to real-time analytics, and to be able to support both? >>Yeah, it's much more of an evolution, I think, than a shift or a pivot. Time series data is always going to be fundamental, the basis of the solutions we offer our customers and of the ones they build themselves on the raw APIs of our platform. The time series market is one we've worked diligently to lead.
When it comes to metrics, especially sensor data and app and infrastructure metrics, if we're being honest, I think our user base is well aware that the way we were architected was much more toward backwards-looking, historical analytics, which are key for troubleshooting and making sure you don't run into the same problem twice. But we had to ask ourselves what we could do to better handle those queries in terms of performance and time to response, and whether we could get to the point where the result sets come back so quickly from the time of query that we can shrink that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could return results within milliseconds of the data hitting the ingest queue. That's really getting to the point where, as soon as your data is available, you can use it, query it, visualize it, and do all those magical things with it. Getting to a place where we can say yes to the customer on all the real-time queries and the multiple-language query support was hard, but we're now at a spot where we can start introducing it to a limited number of strategic customers and strategic availability zones to start, and then to everybody over time. >>So you're basically going from what happened to, you can still do that obviously, but to what's happening now, in the moment? >>Yeah. If you think about time, it's always somewhat in the past, right? In the moment right now, whether you're talking about a millisecond ago or a minute ago, that's pretty much right now for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and the highly distributed nature of all of this. So getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but in this context, maybe it's before the machine blows up. >>Yeah, operational real time is different, and that's one of the things that really told us we were heading in the right direction: just how many operational customers we have. Everything from aerospace and defense, companies monitoring satellites, tons of industrial users, people using us as a process historian on the plant floor. If we can satisfy their demands for a real-time historical perspective, that's awesome. What we're going to do here is start to edge into the real time they're used to in terms of the millisecond response times they expect of their control systems, certainly not of their historians and databases. >>Are these innovations available to InfluxDB Cloud customers only? Who can access this capability? >>Commercially and today, yes.
You know, we want to emphasize that, for now, our goal is to get our latest and greatest to everybody over time. One of the things we did here was double down on our commitment to open source and availability, so anybody today can take a look at the libraries on our GitHub, inspect them, and even try to implement or execute some of it themselves in their own infrastructure. We're committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, they're big workloads and they have high expectations of us. Number two, it also gives us the opportunity to monitor a little more closely how it's working, how they're using it, and how the system itself is performing. >>Being careful, maybe a little cautious, in terms of how big we go with this right away both limits the risk of any issues that can come with new software rollouts (we haven't seen anything so far) and gives us the opportunity to have meaningful conversations with a small group of users who are using the product. Once we get through that and they give us two thumbs up, we open the gates and let everybody in. It's going to be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense, and you can do some experimentation using the cloud resources. Let's dig into some of the architectural and technical innovations that are going to help deliver on this vision. What should we know there? >>Well, foundationally, we built the new core on Rust. It's a newer, very popular systems language; it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way we can inspect very closely while also relying on it to behave well and surface error conditions cleanly. We've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, for that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. For anybody who's really familiar with the nuts and bolts of our backend, our TSI index and our time-structured merge trees, this is a big break from that: Arrow on the in-memory side and Parquet on the on-disk side. It allows us to present a unified set of APIs for those really fast real-time queries we talked about, as well as for very large historical bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in the machine learning community. To get it all to work, we glued it together with Arrow Flight; that's what we're using as our RPC component. It handles the orchestration and the transport of the columnar data. We're moving to a true columnar database model for this version of the engine, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization, again blurring the line between real-time and historical data. It's highly optimized for both micro-batch and batch, but true streaming as well.
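Brian's point about Arrow Flight carrying columnar data without per-row serialization can be sketched from the client side. This is a hypothetical illustration using `pyarrow.flight`: the endpoint address and the ticket payload are invented, since how a ticket maps to a query is defined by the specific Flight server, and IOx's own endpoint details aren't covered in this conversation.

```python
import pyarrow.flight as flight

# Connect to a (hypothetical) Arrow Flight endpoint.
client = flight.connect("grpc://localhost:8082")

# A ticket tells the server which result stream to hand back; its format is
# server-specific, so this payload is purely illustrative.
ticket = flight.Ticket(b'{"query": "SELECT count(*) FROM cpu"}')

# The response arrives as Arrow record batches: columnar on the wire and
# columnar in memory, with no row-by-row re-serialization in between.
reader = client.do_get(ticket)
table = reader.read_all()          # a pyarrow.Table
print(table.num_rows, table.schema)
```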
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers, and there's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, access controls ensure that only the people who should see your data can see it, and encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on premises using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud.
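As a rough illustration of the single-API, schemaless model the segment describes, here is a small sketch using the `influxdb-client` Python library for the v2 API. The URL, token, org, bucket, measurement, tags, and fields are all placeholders.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; the same client code can target open source,
# cloud, or edge deployments that speak the v2 API.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# No schema is declared up front: a point is just a measurement, tags, and fields.
write_api.write(
    bucket="sensors",
    record=Point("home").tag("room", "kitchen").field("temp_c", 21.5),
)

# Writing a field the earlier points never mentioned needs no migration step.
write_api.write(
    bucket="sensors",
    record=Point("home").tag("room", "kitchen").field("co2_ppm", 512),
)

# Query the last hour back with the same client.
tables = client.query_api().query('from(bucket: "sensors") |> range(start: -1h)')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())
```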
>>Okay, we're back. I'm Dave Valante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here; she's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory processing, of course, for speed. It's a columnar store, so it gives you compression efficiency. It's gonna give you faster query speeds. You store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, which is super useful, and also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it, and the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and that control over memory. And Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high-cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx et cetera, you see things like, you know, even today you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values end up neighboring each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high-cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to read those thousand points from that one column in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. >>With row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp, and then pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >>Okay. So you basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really kind of native. Is the former not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps in InfluxDB IOx is that it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a pandas-like DataFrame API, so that you can take advantage of pandas data frames as well, and all of the machine learning tools associated with pandas.
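To ground the room-and-stove illustration and the Arrow discussion above, here is a tiny sketch with pyarrow; the table shape and values are invented for the example. It shows the two properties called out: a min/max question that touches only one column, and a repetitive tag column that encodes cheaply.

```python
import pyarrow as pa
import pyarrow.compute as pc

n = 1_000
table = pa.table({
    "time":   pa.array(range(n), type=pa.int64()),
    "room":   pa.array(["kitchen"] * n),          # low-cardinality, highly repetitive tag
    "sensor": pa.array(["room", "stove"] * (n // 2)),
    "temp":   pa.array([21.5, 180.0] * (n // 2)),
})

# Column-oriented: answering "min/max temperature" touches only the "temp"
# column, not every tag, field and timestamp of every row.
print(pc.min_max(table["temp"]))

# Repeated neighboring values are cheap to encode; dictionary encoding is one
# columnar trick that exploits exactly this kind of repetition.
print(table["room"].dictionary_encode().type)
```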
>>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented durable file format. It's important because it enables bulk import and bulk export, and it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefit of Parquet. >>Got it. Very popular. So what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxDB has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement and the more we invest in our own project as well. So it's that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage you, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard questions, and you just wanna learn more, to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions, so if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community: collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
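Before the next segment, the Parquet properties described above (bulk import and export, pandas compatibility, small files, column-level reads) are easy to try for yourself. The sketch below uses pyarrow; the file names and data are made up, and the exact size ratio depends entirely on the data, so treat the 16x figure above as a ballpark rather than a guarantee.

```python
import os
import pandas as pd
import pyarrow as pa
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

# Made-up, repetitive sensor data (the kind that compresses well).
df = pd.DataFrame({
    "time": pd.date_range("2022-11-01", periods=100_000, freq="s"),
    "room": ["kitchen"] * 100_000,
    "temp": [21.5, 21.5, 21.6, 21.5] * 25_000,
})
table = pa.Table.from_pandas(df)

pq.write_table(table, "sensors.parquet")   # columnar, compressed by default
pacsv.write_csv(table, "sensors.csv")

print(os.path.getsize("sensors.csv"), os.path.getsize("sensors.parquet"))

# Bulk import back into pandas, reading only the one column you care about.
temps = pq.read_table("sensors.parquet", columns=["temp"]).to_pandas()
print(temps["temp"].max())
```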
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB also has good support. My name's Alex Nada. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series, so we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed; it has probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud or in a private cloud if that's preferred by a customer. Influx Data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have Influx Data by our side. >>Okay, we're back with Tim Yoakum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source: mobile, social platforms, key databases, and of course InfluxDB, and Influx Data has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, Influx really thrives at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools; we really build on the shoulders of giants. And like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen, in particular in the heyday of Hadoop, is that the innovations come so fast and furious, and as a software company you gotta place bets, you gotta commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >>Oh, it moves fast. Yeah, but that's a benefit, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example; that ecosystem is driven by thousands of intelligent developers, engineers and builders, and they're adding value every day. So we have to really keep up with that.
And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly along with cloud. Kubernetes is just, you know, still up and to the right consistently, even with the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering; this is the new hot phrase. Kubernetes has made a lot of things easy for us, because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application and managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but that, I'll call it a PaaS layer if I can use that term: are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, and that we, as an SRE group, as an ops team, can manage with very few people, really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is, like everybody else, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course as a customer you don't even see that. But we don't want to try to reinvent the wheel; like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering and have our developers focus on the Influx Data software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high-cardinality time series platform. We manage it, and, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to keep testing things in flight and rolling things out: changes, new features, better and safer ways of doing deployments. >>All of that happens behind the scenes, and like we had mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure; let us do that for you. >>And that makes sense, but are the innovations that we're talking about in the evolution of InfluxDB a natural evolution for existing customers, or is it opening up new territory for customers? I'm sure the answer is both, but can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing, and IoT, industrial IoT especially: people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place.
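As a rough sketch of that edge-to-cloud pattern, the following hypothetical example reads an hour of raw readings from a local open source instance, downsamples them to one-minute means with pandas, and writes only the rollup to a cloud bucket. The URLs, tokens, bucket, measurement and tag names are all assumptions for illustration; in practice this kind of downsampling would more likely run as an InfluxDB task or a Telegraf pipeline.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Local edge instance (OSS) and cloud instance; URLs, tokens and orgs are placeholders.
edge = InfluxDBClient(url="http://localhost:8086", token="EDGE_TOKEN", org="factory")
cloud = InfluxDBClient(url="https://us-east-1-1.aws.cloud2.influxdata.com",
                       token="CLOUD_TOKEN", org="my-org")

# Pull the last hour of raw readings from the edge as a DataFrame
# (assumes a "line_temp" measurement with a "line" tag and a single result table).
df = edge.query_api().query_data_frame(
    'from(bucket:"raw_lines") |> range(start: -1h) '
    '|> filter(fn: (r) => r._measurement == "line_temp")'
)

# Downsample to one-minute means per production line.
rollup = (
    df.set_index("_time")
      .groupby("line")["_value"]
      .resample("1min")
      .mean()
      .reset_index()
      .rename(columns={"_time": "time", "_value": "mean_temp"})
)

# Ship only the rollup to the cloud; the raw data stays at the edge.
points = [
    Point("line_temp_1m").tag("line", r.line).field("mean_temp", r.mean_temp).time(r.time)
    for r in rollup.itertuples()
]
cloud.write_api(write_options=SYNCHRONOUS).write(bucket="fleet_rollups", record=points)
```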
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to get out of the business of trying to manage that big data and have us take care of it. And of course, as we change the platform, end users benefit from that immediately. >>And so, obviously you're taking away a lot of the heavy lifting for the infrastructure, but would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. That's something you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we use new tools. That's just part of our job, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. So it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us who are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into IoT and the operations technologies, the engineers running that infrastructure, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting with their R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, from my perspective I see it as a two-lane approach with Influx, with any time series data. You've got a lot of stuff that you're gonna run on-prem, like you had mentioned, air gapping; sure, there's plenty of need for that. But at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, will send that data over to the cloud, and the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class.
>>Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming environment, DataFusion and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand; you can go to thecube.net to find them. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech, and you should also check out influxdata.com. There you can learn about the company's products, you'll find developer resources like free courses, you can join the developer community and work with your peers to learn and solve problems, and there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Nov 2 2022


Evolving InfluxDB into the Smart Data Platform Full Episode


 

>>This past May, The Cube in collaboration with Influx data shared with you the latest innovations in Time series databases. We talked at length about why a purpose built time series database for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember the time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IOT equipment. A time series databases have had to evolve to efficiently support realtime data in emerging use cases in iot T and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving Influx DB into the smart Data platform, made possible by influx data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with Time series data generally, and specifically how Influx DB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IOT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of Influx DB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, implementation of Apache Arrow, the use of par k and tooling such as data fusion, which powering a new engine for Influx db. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anna East Dos Georgio, who is a developer advocate at In Flux Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the Influx DB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the Influx DB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of i t and emerging Technology at Influx State of Bryan. Welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why Influx db, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that's a, for now our goal is to get our latest and greatest and our best to everybody over time. Of course. You know, one of the things we had to do here was like we double down on sort of our, our commitment to open source and availability. So like anybody today can take a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of, of, of how big we go with this right away, just sort of both limits, you know, the risk of, of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What, what should we know there? >>Well, I mean, I think foundationally we built the, the new core on Rust. You know, this is a new very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to that us being able to like deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well. And if it does find error conditions, I mean we, we've loved working with Go and, you know, a lot of our libraries will continue to, to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, that power performance and stability rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parque for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our, our time series merged Trees, this is a big break from that, you know, arrow on the sort of in MI side and then Par K in the on disk side. >>It, it allows us to, to present, you know, a unified set of APIs for those really fast real time inquiries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that PARQUE format, which is also cool because there's an entire ecosystem sort of popping up around Parque in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with aero flight. That's sort of what we're using as our, our RPC component. You know, it handles the orchestration and the, the transportation of the Coer data. 
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. Influx DB is an entire platform designed with everything you need to quickly build applications that generate value from time series data influx. DB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. Influx DB Cloud is fully managed so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. Influx TVB Cloud offers a range of security features to protect your data, multiple layers of redundancy ensure you don't lose any data access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite so you can build on open source, deploy to the cloud and then then easily query data in the cloud at the edge or on prem using the same scripts. And InfluxDB is schemaless automatically adjusting to changes in the shape of your data without requiring changes in your application. Logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today@influxdata.com slash cloud. >>Okay, we're back. I'm Dave Valante with a Cube and you're watching evolving Influx DB into the smart data platform made possible by influx data. Anna ETOs Georgio is here, she's a developer advocate for influx data and we're gonna dig into the rationale and value contribution behind several open source technologies that Influx DB is leveraging to increase the granularity of time series analysis analysis and bring the world of data into real-time analytics and is welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IX is being touted as this next gen open source core for Influx db. And my understanding is that it leverages in memory of course for speed. It's a kilo store, so it gives you a compression efficiency, it's gonna give you faster query speeds, you store files and object storage, so you got very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve and some of the most impressive ones to me, the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's live tag or a field. It also wants to deliver the best in class performance on analytics queries. In addition to our already well served metrics queries, we also wanna have operator control over memory usage. So you should be able to define how much memory is used for buffering caching and query processing. Some other really important parts is the ability to have bulk data export and import super useful. 
Also broader ecosystem compatibility where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like sql, Python, and maybe even pandas in the future. >>Okay, so lot there. Now we talked to Brian about how you're using Rust and which is not a new programming language and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. It's really, the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with rust, but why rust as an alternative to say c plus plus for example? >>Sure, that's a great question. So Russ was chosen because of his exceptional performance and reliability. So while Russ is synt tactically similar to c plus plus and it has similar performance, it also compiles to a native code like c plus plus. But unlike c plus plus, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And rust achieves this memory safety due to its like innovative type system. Additionally, it doesn't allow for dangling pointers. And dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like c plus plus. So Russ like helps meet that requirement of having no limits on ality, for example, because it's, we're also using the Russ implementation of Apache Arrow and this control over memory and also Russ Russ's packaging system called crates IO offers everything that you need out of the box to have features like AY and a weight to fix race conditions, to protection against buffering overflows and to ensure thread safe async cashing structures as well. So essentially it's just like has all the control, all the fine grain control, you need to take advantage of memory and all your resources as well as possible so that you can handle those really, really high ity use cases. >>Yeah, and the more I learn about the, the new engine and, and the platform IOCs et cetera, you know, you, you see things like, you know, the old days not even to even today you do a lot of garbage collection in these, in these systems and there's an inverse, you know, impact relative to performance. So it looks like you really, you know, the community is modernizing the platform, but I wanna talk about Apache Arrow for a moment. It it's designed to address the constraints that are associated with analyzing large data sets. We, we know that, but please explain why, what, what is Arrow and and what does it bring to Influx db? >>Sure, yeah. So Arrow is a, a framework for defining in memory calmer data. And so much of the efficiency and performance of IOx comes from taking advantage of calmer data structures. And I will, if you don't mind, take a moment to kind of of illustrate why column or data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera we're getting this data from. 
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well usually our room temperature is regulated so those values don't change very often. >>So when you have calm oriented st calm oriented storage, essentially you take each row, each column and group it together. And so if that's the case and you're just taking temperature values from the room and a lot of those temperature values are the same, then you'll, you might be able to imagine how equal values will then enable each other and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables for faster scan rates. So if you wanna define like the men and max value of the temperature in the room across a thousand different points, you only have to get those a thousand different points in order to answer that question and you have those immediately available to you. But let's contrast this with a row oriented storage solution instead so that we can understand better the benefits of calmer oriented storage. >>So if you had a row oriented storage, you'd first have to look at every field like the temperature in, in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is. And every timestamp you'd then have to pluck out that one temperature value that you want at that one time stamp and do that for every single row. So you're scanning across a ton more data and that's why Rowe Oriented doesn't provide the same efficiency as calmer and Apache Arrow is in memory calmer data, commoner data fit framework. So that's where a lot of the advantages come >>From. Okay. So you basically described like a traditional database, a row approach, but I've seen like a lot of traditional database say, okay, now we've got, we can handle colo format versus what you're talking about is really, you know, kind of native i, is it not as effective? Is the, is the foreman not as effective because it's largely a, a bolt on? Can you, can you like elucidate on that front? >>Yeah, it's, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are, that's pretty much the main reasons why, why RO row oriented storage isn't as efficient as calm, calmer oriented storage. Yeah. >>Got it. So let's talk about Arrow Data Fusion. What is data fusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as it's in memory format. So the way that it helps in influx DB IOCs is that okay, it's great if you can write unlimited amount of cardinality into influx Cbis, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So Data fusion helps enable the, the query process and transformation of that data. It also has a PANDAS API so that you could take advantage of PANDAS data frames as well and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Par K in the platform cause we heard a lot about Par K in the middle of the last decade cuz as a storage format to improve on Hadoop column stores. What are you doing with Parque and why is it important? >>Sure. So parque is the column oriented durable file format. 
So it's important because it'll enable bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented; in particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So, first, Influx has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go that will support querying with Flux. Also, there have been quite a few contributions to Data Fusion for things like memory optimization and support for additional SQL features, like timestamp arithmetic, EXISTS clauses, and memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel; look for the InfluxDB underscore IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community: collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum, he's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
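To make the column-versus-row discussion above concrete, here is a minimal Python sketch using pyarrow, the Arrow and Parquet libraries the conversation refers to. The room and stove temperature table is invented for illustration and the file name is a placeholder; this is a sketch of the general technique, not InfluxDB IOx code.

```python
# A minimal sketch, assuming pyarrow is installed (pip install pyarrow).
# The table of mostly-constant temperatures mirrors the example above:
# repeated values sitting next to each other in a column compress cheaply.
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pa.table({
    "time": pa.array(range(1000)),
    "location": pa.array(["room"] * 1000),
    "temperature": pa.array([21.0] * 990 + [21.5] * 10),
})

# Column-oriented min/max: only the "temperature" column is scanned,
# no row-by-row plucking of values across every field and tag.
print(pc.min_max(table["temperature"]))

# Parquet is the durable, column oriented file format: repeated values
# shrink to very little disk space on write.
pq.write_table(table, "temps.parquet", compression="snappy")

# Reading back only the columns you need avoids touching the rest.
subset = pq.read_table("temps.parquet", columns=["time", "temperature"])
print(subset.num_rows, subset.column_names)
```

The point of the sketch is simply the access pattern: answering a min/max question touches one column, whereas a row oriented layout would force a scan across every field, tag, and timestamp of every row.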
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble Nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and, as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed; it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx Data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have Influx Data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source, mobile, social platforms, key databases, and of course InfluxDB, and Influx Data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, Influx really, we thrive at the intersection of commercial services and open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and, like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen, in particular you saw this in the heyday of Hadoop, is the innovations come so fast and furious, and as a software company you gotta place bets, you gotta, you know, commit people, and sometimes those bets can be risky and not pay off. Well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that's a benefit though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, and they're adding value every day. So we have to really keep up with that.
And as the stack changes, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. But really Kubernetes is just, you know, still up and to the right consistently, even with, you know, the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering. This is the new hot phrase, you know. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but I'll call it a PaaS layer if I can use that term. Are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? You know, is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we really look at things through a build versus buy lens. Some things we want to leverage cloud provider services, for instance Postgres databases for metadata; perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy it. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is, like everybody else will do, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see that. But we don't want to try to reinvent the wheel; like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected. And we can then focus on our platform engineering, and we can have our developers focus on the Influx Data software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored. So there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean, that allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure; you know, let us do that for you. >>And that makes sense, but are the innovations that we're talking about in the evolution of InfluxDB, do you see that as sort of a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, and industrial IoT especially; people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place.
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data; have us take care of that. And of course, as we change the platform, end users benefit from that immediately. >>And so obviously you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. You know, that's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure and is only viewable by you. You know, you look at things like a software bill of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that, you know, as we use new tools. That's just part of our jobs, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into, you know, IoT and the operational technologies, the engineers running that infrastructure, you know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting to R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach with Influx, with any time series data. You know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure, there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud; the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class.
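As a rough sketch of the edge-to-cloud pattern Tim describes above (collect at high resolution locally, downsample, ship only the aggregates to InfluxDB Cloud), the following Python example uses pandas and the influxdb-client package. The URL, token, org, bucket, and sensor values are all placeholders invented for illustration; real pipelines built on Telegraf or the OSS product may look quite different.

```python
# A minimal sketch of edge downsampling, assuming the influxdb-client and
# pandas packages are installed (pip install influxdb-client pandas).
# URL, token, org, and bucket are hypothetical placeholders, not real
# endpoints or credentials.
import pandas as pd
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Pretend these are one-second readings collected at the edge.
raw = pd.DataFrame({
    "time": pd.date_range("2022-10-01", periods=3600, freq="1s", tz="UTC"),
    "temperature": 21.0,
}).set_index("time")

# Downsample to one-minute means before shipping anything to the cloud.
downsampled = raw["temperature"].resample("1min").mean()

with InfluxDBClient(url="https://example.cloud.influxdata.com",
                    token="MY_TOKEN", org="my-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    for ts, value in downsampled.items():
        point = (Point("environment")
                 .tag("site", "production-line-1")
                 .field("temperature_mean", float(value))
                 .time(ts, WritePrecision.S))
        write_api.write(bucket="edge-rollups", record=point)
```

The design point is simply that the heavy raw data never leaves the edge; only the one-minute aggregates are written to the cloud bucket, where the heavier processing happens.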
>>Okay, so we heard today from three experts on time series and data, how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming environment, Data Fusion, and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products, you'll find developer resources like free courses, you can join the developer community and work with your peers to learn and solve problems, and there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 28 2022


Anais Dotis Georgiou, InfluxData


 

(upbeat music) >> Okay, we're back. I'm Dave Vellante with The Cube and you're watching Evolving InfluxDB into the smart data platform, made possible by Influx Data. Anais Dotis-Georgiou is here. She's a developer advocate for Influx Data, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >> Hi, thank you so much. It's a pleasure to be here. >> Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in memory, of course for speed. It's a columnar store, so it gives you compression efficiency, it's going to give you faster query speeds, it lets you store files in object storage, so you've got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >> Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best in class performance on analytics queries, in addition to our already well served metric queries. We also want to have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, super useful, and also broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >> Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns, and you got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. Its adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >> Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, called Crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, to protect against buffer overflows, and to ensure thread safe async caching structures as well. So essentially it has all the fine grain control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >> Yeah, and the more I learn about the new engine and the platform, IOx et cetera, you see things like, even from the old days to today, you do a lot of garbage collection in these systems and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform, but I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow and what does it bring to InfluxDB? >> Sure. Yeah. So Arrow is a framework for defining in-memory columnar data. And so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our store. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have two rows with the two temperature values for both our room and the store. Well, usually our room temperature is regulated, so those values don't change very often. So when you have column oriented storage, essentially you take each column and group its values together. And so if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you want to find the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row oriented storage solution instead, so that we can understand better the benefits of column oriented storage. So if you had row oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the store. You'd have to go across every tag value that maybe describes where the room is located or what model the store is, and every timestamp. You then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row.
So you're scanning across a ton more data, and that's why row oriented doesn't provide the same efficiency as column oriented. And Apache Arrow is an in-memory columnar data format framework. So that's where a lot of the advantages come from. >> Okay. So you've basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle column format, versus what you're talking about, which is really kind of native. Is the former not as effective because it's largely a bolt on? Can you elucidate on that front? >> Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row oriented storage isn't as efficient as column oriented storage. >> Yeah. Got it. So let's talk about Arrow Data Fusion. What is Data Fusion? I know it's written in Rust, but what does it bring to the table here? >> Sure. So it's an extensible query execution framework and it uses Arrow as its in-memory format. So the way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So Data Fusion helps enable the query process and transformation of that data. It also has a Pandas API so that you could take advantage of Pandas data frames as well and all of the machine learning tools associated with Pandas. >> Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >> Sure. So Parquet is the column oriented durable file format. So it's important because it'll enable bulk import and bulk export. It has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented; in particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >> Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >> Sure. So, first, Influx has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to Data Fusion for things like memory optimization and support for additional SQL features, like timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >> Yeah. Got it. You got that virtuous cycle going, people call it the flywheel.
Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >> So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just want to learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel. Look for the InfluxDB underscore IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions. So if there's a particular technology or stack that you want to dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >> Yeah, that's awesome. You guys have a really rich community; collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >> Thank you. I really appreciate it. >> All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakam. He's the director of engineering for Influx Data, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this. (upbeat music)
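For readers who want to see roughly how the Arrow, Data Fusion, and Parquet pieces discussed in this segment fit together from Python, here is a small sketch using the Apache DataFusion Python bindings. The file name and query are made up, and exact function names can vary between versions of the datafusion package, so treat it as an illustration of the idea rather than anything InfluxDB IOx ships.

```python
# A minimal sketch, assuming the DataFusion Python bindings are installed
# (pip install datafusion pyarrow pandas). "temps.parquet" is a placeholder
# file; this is not IOx code, just the same open source pieces used directly.
from datafusion import SessionContext

ctx = SessionContext()

# Register a Parquet file as a queryable table; DataFusion reads it as
# Arrow record batches under the hood.
ctx.register_parquet("temps", "temps.parquet")

# Run a SQL analytics query over the columnar data.
df = ctx.sql("""
    SELECT location,
           MIN(temperature) AS min_temp,
           MAX(temperature) AS max_temp
    FROM temps
    GROUP BY location
""")

# Hand the result to pandas for downstream machine learning tooling.
pandas_df = df.to_pandas()
print(pandas_df)
```

This mirrors the relationship described above: Parquet as the durable column oriented file format, Arrow as the in-memory representation, Data Fusion as the SQL query engine, and Pandas as the bridge to the broader Python ecosystem.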

Published Date : Oct 18 2022



Susie Wee, Cisco DevNet | Cisco Live EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live! Europe, brought to you by Cisco and its ecosystem partners. >> Hello everyone, welcome back to theCUBE's live coverage here in Barcelona, Spain, for Cisco Live! Europe 2019. I'm John Furrier, with my co-host Dave Vellante, and Stu Miniman has been co-hosting all week as well; three days of coverage, we're in day two. We're here with a very special guest, we're in the DevNet Zone, and we're here with the leader of the DevNet team at Cisco, Susie Wee, Senior Vice President, CTO of Cisco DevNet. Welcome, good to see you. >> Thank you, good to see you, and I'm glad that we have you here again in the DevNet Zone. >> You've been running around, it's been super exciting to watch the evolution. We chatted a couple of years ago: okay, we're going to get some developer-centric APIs and a small community growing; now it's exploding. (Susie laughs) A feature of the show, the size gets bigger every year. >> It was interesting, yeah, we took a chance on it, right? So we didn't know, and you took this bet with me, that the network is becoming programmable, the infrastructure is programmable, and not only is the technology becoming programmable, but we can take the community of networkers, IT infrastructure folks, app developers and get them to understand the programmability of the infrastructure, and it's really interesting that, you know, these classes are packed, they're very deep, they're very technical, the community's getting along and, you know, networkers are developers. >> Yeah you know, you nailed it, because I think as a CTO, you understood the dev-ops movement, saw that in cloud. And I remember my first conversation with you, like, you know, the network has a dev-ops angle too if you can make it programmable, and that's what it's done, and you're seeing Cisco-wide having this software abstraction: ACI anywhere, HyperFlex anywhere, connected to the cloud, now Edge. APIs are at the center, the DNA Center platform. >> API First, very successful project. >> Yes yes, it's-- >> The new DNA of Cisco is APIs, this is what it's all about. >> It is, it is, and you know, like at first, you know, when we started this journey five years ago a few of our products had APIs, like a few of them were programmable. But you know, you don't take your network there overnight; it's programmable when you have this type of thing. But we've been building it in, and now practically every product is programmable, every product has APIs, so now you have a really rich fabric of, yeah, security, data center, enterprise and campus and branch networks. Like, and it can now put together really interesting things. >> Well congratulations, it happened and it's happening, so I got to ask the question: now that it's happening, happened and happening, continuing to happen, what's the impact to the customer base? Because now you're seeing Cisco clearly defining the network and the security aspect of what the network can do, foundationally, and then enabling it to be programmable. >> Yeah. >> What's happening now for you guys? Obviously apps could take advantage of it, but what else is the side effect of this investment? >> Yeah so, the interesting thing is, if we take a look at the industry at large, what happens is, you kind of have the traditional view of IT, you know, so if you take a look at IT, you know, what do you need it for? I need it to get my compute, just give me my servers, give me my network, and let's just hope it works.
And then it was also viewed as being old, like I can get all this stuff on the cloud, and I can just do my development there, why do I need all of that stuff right? But once you take it, and you know, the industry has come along, what happens is, you need to bring those systems together, you need to modernize your IT, you need to be able to just, you know, take in the cloud services, to take the applications come across, but the real reason you need it is because you want to impact the business, you know, so kind of what happens is like, every business in the world, every, is being disrupted right, and if you take a look, it has a digital disruptor going on. If you're in retail, then, you know, you're a brick and mortar, you know, traditionally a brick and mortar store kind of company, and then you have an online retailer that's kind of starting to eat your lunch, right, if you're in banking, you have the digital disruption like every, manufacturing is starting to get interesting and you know, what you're doing in energy. So all of this has kind of disruption angles, but really the key is that, IT holds the keys. So, IT can sit there and keep its old infrastructure and say, I have all this responsibility, I'm running this machinery, I have this customer database, or you can modernize, right? And so you can either hold your business back, or you can modernize, make it programmable and then suddenly allow cloud native, public, private cloud, deploy new applications and services and suddenly become an innovative platform for the company, then you can solve business problems and make that real, and we're actually seeing that's becoming real. (laughs) >> Well and you're seeing it right in front of us. So a big challenge there of what you just mentioned, is just having the skills to be able to do that but the appetite of this audience to absorb that knowledge is very very high, so for example, we've been here all week watching, essentially Cisco users, engineers, absorb this new content to learn how to basically program infrastructure. >> That's right, and it's not Cisco employees, it's the community, it's the world of like, Cisco-certified engineers like, people who are doing networking and IT for companies and partners around the world. >> And so, what do they have to go through to get from, you know, where they were, not modernized to modernized? >> Yeah, and actually, and that's a good way 'cause when we look back to five years ago, it was a question, like we knew the technology was going to become programmable and the question is, are these network guys, you know, are these IT guys everywhere are they going to stay in the old world are they really going to be the ones that can work in the new world, or are we going to hire a bunch of new software guys who just know it, are cloud native, they get it all, to do it all. Well, it doesn't work that way because to work in oil and gas, you need some expertise in that and those guys know about it, to work in, you know, retail and banking, and all of these, there's some industry knowledge that you need to have. But then you need to pick up that software skill and five years ago, we didn't know if they would make that transition, but we created DevNet to give them the tools within their language and kind of, you know if they do and what we found is that, they're making the jump. And you see it here with everyone behind us, in front of us, like they are learning. >> Your community said we're all in. 
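To give a flavor of what programming the infrastructure looks like at the level of that very first API call, here is a small Python sketch against a DNA Center style REST API, in the spirit of the DevNet learning tracks that come up in this conversation. The host, credentials, and endpoint paths are placeholders based on commonly documented DNA Center APIs, not something quoted from the interview, and details can vary by platform version.

```python
# A minimal sketch of the classic "get network devices" call against a
# DNA Center instance. Host, username, and password are placeholders; in
# DevNet's learning labs these would come from the always-on sandbox.
# Endpoint paths follow the commonly documented /dna/system and /dna/intent
# routes, but check your DNA Center version's API docs before relying on them.
import requests
from requests.auth import HTTPBasicAuth

DNAC_HOST = "https://sandboxdnac.example.com"   # placeholder host
USERNAME = "devnetuser"                         # placeholder credentials
PASSWORD = "password"

# Step 1: authenticate and get a short-lived token.
token_resp = requests.post(
    f"{DNAC_HOST}/dna/system/api/v1/auth/token",
    auth=HTTPBasicAuth(USERNAME, PASSWORD),
    verify=False,  # sandbox certificates are often self-signed
)
token_resp.raise_for_status()
token = token_resp.json()["Token"]

# Step 2: the classic first call, listing the network devices the
# controller knows about.
devices_resp = requests.get(
    f"{DNAC_HOST}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
    verify=False,
)
devices_resp.raise_for_status()

for device in devices_resp.json().get("response", []):
    print(device.get("hostname"), device.get("managementIpAddress"))
```

The same two requests can be replayed in Postman, which is exactly the kind of first step the conversation turns to next.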
Well I'm interested in, we've seen other large organizations infrastructure companies try to attract developers like this, I'm wondering is it because of the network, is it because of Cisco? Are there some other ingredients that you could buy, is it the certified engineers who have this appetite? Why is it that Cisco has been so successful, and I can name a number of other companies that have tried and failed, some of them even owned clouds, and have really not been able to get traction with developers, why Cisco? >> Well I mean, I think we've been fortunate in many ways, as we've been building it out but I think part of it, you know like the way any company would have to go about you know, kind of taking on programmability, dev-ops, you know, these types of models, is tough, and it's, there's not one formula for how you do it, but in our case, it was that Cisco had a very loyal community. Or we have, and we appreciate that very loyal community 'cause they are out there, workin' the gear, building the networks like, running train stations, transportation systems you know, running all around the world, and so, and they've had to invest a lot into that knowledge. Now we then, gave them the tools to learn, we said, here's coding 101, here's your APIs, here's how to learn about it, and your first API call will be get network devices. Here's how you automate your infrastructure, here's how you do your things, and because we put it in, they're grabbing on and they're doing it and you know, so, it was kind of having that base community and being respectful of it and yet, bringing them along, pushing them. Like we don't say keep doing things the old way yes, learn software, and we're not going to water down how you have to learn software. Like you're going to get in there, you're going to use Rest APIs, you're going to use Postman, you're going to use Git, and we have that kind of like first track to just get 'em using those tools. And we also don't take an elitist culture like we're very welcoming of it, and respectful of what they've done and like, just teach 'em and let 'em go. And the thing is like, once you do it, like once you spend your time and you go oh, okay, so you get the code from GitHub, I got it, now I see all this other stuff. Now I made my Rest API call and I've used Postman. Oh, I get it, it's a tool. Just, once you've done just that, you are a different person. >> And then it's business impact. >> Then it's business, yeah no and like then you're also able to experiment, like you suddenly see a bigger world. 'Cause you've been responsible for this one thing, but now you see the bigger world and you think differently, and then it's business impact, because then you're like okay, how do I modernize my infrastructure? How can I just automate this task that I do every day? I'm like, I don't want to do that anymore, I want to automate it, let me do this. And once you get that mindset, then you're doing more, and then you're saying wait, now can I install applications on this, boy, my network and my infrastructure can gives lots of business insights. So I can start to get information about what applications are being called, what are being used, you know, when you have retail operations you can say, oh, what's happening in this store versus that store? When you have a transportation system, where are we most busy? When you're doing banking, where is like, are you having mobile transactions or in-store transactions? 
There's all this stuff you learn and then suddenly, you can, you know, really create the applications that-- >> So they get the bug, they get inspired they stand up some quick sandbox with some value and go wow-- >> Or they use our DevNet Sandbox so that they can start stuff and get experi-- >> It's a cloud kind of mindset of standing something up and saying look at it, wow, I can do this, I can be more contributing to the organization. Talk about the modernization, I want to get kind of the next step for you 'cause the next level for you is what? Because if this continues, you're going to start to see enterprises saying oh, I can play in the cloud, I can use microservices. >> Yes. >> I can tap into that agility and scale of the cloud, and leverage my resources and my investment I have now to compete, you just mentioned that. How is that going to work, take us through that. >> Yeah and there's more, in addition to that, is also, I can also leverage the ecosystem, right? 'Cause you're used to doing everything yourself, but you're not going to win by doing everything yourself, even if you made everything modern, right? You still need to use the ecosystem as well. But you know, but then at that stage what you can do and actually we're seeing this as, like our developers are not only the infrastructure folks, but now, all of the sudden our ISVs, app developers, who are out there writing apps, are able to actually put stuff into the infrastructure, so we actually had some IoT announcements this week, where we have these industrial routers that are coming out, and you can take an industrial router and put it into a police car and because a police car has a dashboard camera, it has a WiFi system, it has on-board computer, tablets, like all of this stuff, the officer has stuff, that's a mobile office. And it has a gateway in it. Well now, the gateway that we put in there does app hosting, it can host containerized applications. So then if you take a look at it, all the police cars that are moving around are basically hosting containerized apps, you have this kind of system, and Cisco makes that. >> In a moveable edge. >> And then we have the gateway manager that does it, and if you take a look at what does the gateway manager do it has to manage all of those devices, you know, and then it can also deploy applications. So we have an ability to now manage, we also have an ability to deploy containers, pull back containers, and then this also works in manufacturing, it works in utility, so you have a substation, you have these industrial routers out there that can host apps, you know, then all of a sudden edge computing becomes real. But what this brings together is that now, you can actually get ISVs who can actually now say, hey I'm an app developer, I wanted to write an app, I have one that could be used in manufacturing. I could never do it before, but oh, there's this platform, now I can do it, and I don't have to start installing routers, like a Cisco partner will do it for a customer, and I can just drop my app in and it's, we're actually seeing that now-- >> So basically what's happening, the nirvana is first of all, intelligent edge is actually possible. >> Yes. >> With having the power at the edge with APIs, but for the ISVs, they might have the domain expertise at saying, hey I'm an expert on police, fire, public safety, vertical. >> Yes. >> But, I could build the best app, but I don't need to do all this other stuff. >> Yes. >> So I can focus all my attention on this. >> Yes. 
>> And their bottleneck was having that kind of compute and or Edge device. >> Yes. >> Is that what you're kind of getting at? >> Yeah, and there's, exactly it was because you know, I mean an app developer is awesome at writing apps. They don't want to get into the business of deploying networks and like even managing and operating how that is, but there's a whole like kind of Cisco ecosystem that does that. Like we have a lot of people who will love to operationalize that system, deploy that, you know, kind of maintain it. Then there's IT and OT operators who are running that stuff, but that app developer can write their app drop it into there, and then all of that can be taken care of. And we actually have two ISVs here with us, one in manufacturing, one in utilities, who are, you know, DevNet ISV partners, they've written applications and they actually have real stories about this, and kind of what they had to say is, like in the manufacturing example, is okay, so they write, they have this innovation, I wrote this cool app for manufacturing, right? So there's something that it does, it's building it, you know, they've gotten expertise in that, and then, as they've been, they're doing something innovative, they actually need the end customer, who does, the manufacturer, to use it, and adopt a new technology. Well, hey, you know, I'm running my stuff, why should I use that, how would I? So they actually work with a systems integrator, like a channel partner that actually will customize the solution. But even that person may not have thought about edge computing, what can you do, what's this crazy idea you have, but now they've actually gotten trained up, they're getting trained up on our IoT technologies, they're getting trained up on how to operationalize it, and this guy just writes his app, he actually points them to the DevNet Sandbox to learn about it, so he's like, no let me show you how this Edge processing thing works, go use the DevNet Sandbox, you can spin up your instance, you can see it working, oh look there's these APIs, let me show you. And it turns out they're using the Sandbox to actually train the partners and the end customer about what this model is like. And then, these guys are adopting it, and they're getting paying customers through this. >> Did you start hunting for ISVs, did they find you, how did that all transpire? >> It kind of happens in all different ways. (laughter) >> So yes. >> Yeah yeah, it happens in all different ways, and basically, in some cases like we actually sometimes have innovation centers and then you have you know, kind of as you know, the start-up that's trying to figure out how to get their stuff seen, they show up, we look for it. In our case in Italy, with the manufacturing company, then what happened was, the government was actually investing and the government was actually giving tax subsidies for manufacturing plants to modernize. And so, what they were doing was actually giving an incentive and then looking for these types of partners, so we actually teamed up with our country teams to find some of these and they have a great product. And then we started, you know, working with them. They actually already had an appreciation for Cisco because they, you know, in their country, they did computer science in college, they might've done some networking with the Cisco Networking Academy, so they knew about it, but finally, it came that they could actually bring this ecosystem together. 
>> Susie, congratulations on all your success, been great to be part of it in our way, but you and your team have done an amazing job, great feedback on Twitter on the swag got the-- (laughter) Swag bag's gettin' a lot of attention, which is always a key important thing. But in general, super important initiative, share some insight into how this has changed Cisco's executive view of the world because, you know, the cloud had horizontal scalability, but Cisco had it too. And now the new positioning, the new branding that Karen Walker and her team are putting out, the bridge to tomorrow, the future, is about almost a horizontally scalable Cisco. It's everywhere now so-- >> Yeah the bridge to possible, yeah. >> Bridge to possible, yes. >> Yeah well I mean, really what happens is, you know, there was a time when you're like, I'm going to buy my security, I'm going to buy my networking, I'm going to buy my data center, but really more and more people just want an infrastructure that works, right? An infrastructure that's capable that can allow you to innovate, and really what happens, when you think about how do you put all of these systems together, 'cause they're still individual, and they need to be individual in best in class products, well the best way to put 'em together is with APIs. (laughs) So, it's not that you need to architect them all into one big product, it's actually better to have best in class, clearly define the APIs, and then allow that kind of modularity to build it out. So, really we've had tremendous support from Chuck Robbins, our CEO, and he's understood this vision and he's been helping, kind of, you know, like DevNet is a start-up itself, like he's been helping us navigate the waters to really make it happen and as we moved and as he's evolved the organization, we've actually started to get more and more support from our executives and we're working across the team, so everything that we do is together with all the teams. And now what we're doing is we're co-launching products. Every time we launch a new product, we launch a new product with the product offer and the developer offer. >> Yeah. >> So, you know, here we've launched the new IoT products. >> With APIs. >> And, with APIs, and IOX and app-hosting capabilities, and we launched them together with a new DevNet IoT developer center. At developer.cisco.com/iot, and this is actually, if you take a look at the last say half year or year, our products have been launching, you'll see, oh here's the new DNA Center, and here's the new DevNet developer center. You know, then we can say, here's the new kind of ACI, and here's the new ACI developer center. Here's the new Meraki feature, here's the new ACI-- >> And it's no secret that DNA Center has over 600 engineers in there. >> Yeah (laughs) >> That public information might not be-- >> You know, but we've actually gotten in the mode in the understanding of you know, every product should have a developer offer because it's about the ecosystem, and we're getting tremendous support now. >> Yeah a lot of people ask me about Amazon Web Services 'cause we're so close, we cover them deeply. They always ask me, hey John, why is that, why is Amazon so successful? I go, well they got a great management team, they've got a great business model, but it was built on APIs first. It was a web service framework.
You guys have been very smart by betting on the API because that's where the growth is, so it's not Amazon being the cloud, it's the fact that they built building blocks with APIs, that grew. >> Yes. >> And so I think what you've got here, that's lightning in a bottle is, having an API strategy creates more connections, connections create more fabric, and then there's more data, it's just, it's a great growth vehicle. >> Absolutely. >> So, congratulations. >> Thank you. >> So is that your marketplace, do you have a marketplace so it's just, I guess SDKs and APIs and now that you have ISVs comin' in, is that sort of in the plan? >> We do, no we do actually so, so yeah so basically, when you're in this world, then you have your device, you know, it's your phone, and then you have apps that you download and you get it from an app store. But when we're talking about, you know, the types of solutions we're talking about, there is infrastructure, there is infrastructure for you know, again, utilities companies, for police stations, for retail stores, and then, you have ISV applications that can help in each of those domains. There's oftentimes a systems integrator that's putting something together for a customer. And so now kind of the app store for this type of thing actually involves, you know, our infrastructure products together with kind of, and infrastructure, and third-party ones, you know, ISV software that can be customized and have innovation in different ways together with that system integrator and we're training them all, people across that, but we actually have something called DevNet Exchange. And what we've done is there's actually two parts, there's Code Exchange, which is basically, pointers out to you know, source code that's out in GitHub, so we're just going out to code repos that are actually helping people get started with different products. But in addition, we have Ecosystem Exchange, which actually lists the ISV solutions that can be used as well as the systems integrators who can actually deliver solutions in these different domains, so you know, DevNet Ecosystem Exchange is the place where we actually do list the ISVs with the SIs you know, with the different platforms so, that's the app store for a programmable infrastructure. >> Susie, congratulations again, thank you so much for including us in your DevNet Zone with theCUBE here for three days. >> Thank you for coming to us and for really helping us tell the story. >> It's a great story to tell and it's kickin' butt and takin' names-- (laughter) Susie Wee, Senior Vice President and CTO of DevNet, makin' it happen, just the beginning, scratching the surface of the explosion of API-based economies, around the network, the network value, and certainly cloud and IoT. Of course, we're bringing you the edge of the network here with theCUBE, in Barcelona, we'll be back with more live coverage day two, after this short break. (upbeat music)
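As a rough illustration of the API-first point above, where every product launch ships with a developer center and a documented API, the sketch below authenticates to a DNA Center instance and lists the devices it manages through its REST API. The host, credentials, exact endpoint paths, and response field names are assumptions based on the Intent API as documented on DevNet around that time, so verify them against the current developer center before relying on them.

```python
# Rough sketch: call a DNA Center REST API to list managed devices.
# Endpoint paths, field names, host, and credentials are assumptions
# to be checked against the published Intent API documentation.
import requests

DNAC_HOST = "sandboxdnac.example.com"   # placeholder host
USERNAME = "devnetuser"                  # placeholder credentials
PASSWORD = "password"

def get_token() -> str:
    """Exchange basic-auth credentials for an API token."""
    resp = requests.post(
        f"https://{DNAC_HOST}/dna/system/api/v1/auth/token",
        auth=(USERNAME, PASSWORD),
        verify=False,  # lab/sandbox certificates are often self-signed
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["Token"]

def list_devices(token: str) -> list:
    """Return the inventory of network devices the controller manages."""
    resp = requests.get(
        f"https://{DNAC_HOST}/dna/intent/api/v1/network-device",
        headers={"X-Auth-Token": token},
        verify=False,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("response", [])

if __name__ == "__main__":
    for device in list_devices(get_token()):
        print(device.get("hostname"), device.get("managementIpAddress"))
```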

Published Date : Jan 30 2019


Liz Centoni, Cisco | Cisco Live EU 2019


 

>> Live, from Barcelona, Spain, it's theCUBE! Covering Cisco Live! Europe brought to you by Cisco and its ecosystem partners. >> Welcome back everyone, live here, in Barcelona, Spain, it's theCUBE's coverage of Cisco Live! Europe 2019, I'm John Furrier my co-host Dave Vellante, our next guest is Liz Centoni, Senior Vice President, General Manager of the IoT group at Cisco, formerly as part of the engineering team, Cube alumni, great to see you again, thanks for coming on! >> Great to be here, always good to see you guys. >> So you're in the center of a lot of news, IOT, edge of the network, redefining networking on stage, we heard that, talk about your role in the organization of Cisco and the products that you now have and what's goin' on here. >> So I run our IoT Business group. Similar to what we do with EN, data center, all of that, it has the engineering team, product management team, we build products, solutions, that includes hardware, software, silicon, take 'em out to market, really in IoT it's about you know, the technology conversation comes second. It's like, what can you deliver in terms of use case, and business outcomes that comes first. And it's more about what technology can enable that, so the conversations we have with customers are around, how can you really solve my kind of real problems. Everything from, I want to grow my top line, I want to get closer to my customers, because the closer I get to my customers, I know them better, so obviously, I can turn around and grow my top line. And I want to optimize everything from internal process to external process, because just improves my bottom line at the end of the day. >> So a lot of news happening here around your team, but first, talk about redefining networking in context to your part because, edge of the network has always been, what is, you know, edge of the network, now it's extending further, IoT is one of those things that people are looking at from a digitization standpoint, turning on more intelligence, with a factory floor or other areas, how is IoT changing and what is it today? >> So you gave an example of you know, digitizing something like a factory floor. Right, so let's talk about that. So what do customers on the factory floor want to do? They've already automated a number of this factory floors, but what they want to do is get more efficient. They want better EL, they want better quality. They want to bring security all the way down to the plant floor, 'cause the more and more you connect things, the more you've just expanded your threat surface out pretty significantly. So they want to bring security down to the plant floor because these are environments that are not brand new they had brown field equipment they had green field equipment. They want to be able to have control over what device gets on the network with things like device profiling. They want to be able to do things like, create zones so that they can do that with things like network segmentation so when and if an attack does happen, they can contain the attack as much as possible alright? Now, what you need in terms of a factory floor, automation, security, to be able to scale, to have that flexibility, that's no different than what you have in the enterprise already. I mean we've been working with our IT and enterprise customers for years, and you know, they, it's about automation and security, it's about simplicity. 
Why not extend that out, the talent that IT has, the capability that it has, it really is a connective tissue that you're extending your network from that carpeted space, or your clean space into outside of the office, or into the non-carpeted space so it's perfect in terms of saying, it's about extending the network into the non-traditional space that probably IT doesn't go into today. >> Well right and it's a new constituency, right? So, how are you sort of forging new relationships, new partnerships, what is, describe what that's like, with the operations technology folks. >> I mean at Cisco, we have great partnerships with the IT organization, right? I mean we've got more than 840,000 customers and our sales teams, our product teams do a good job in terms of listening to customers. We're talking more and more to the line of business, we're talking more and more to the operational teams. Because at the end of the day, I want to be candid. You know, going to a manufacturing floor, I've never run a plant floor, right? There are not very many people in the team who can say, I've been a plant manager before. They know their processes, they're concerned about 24/7 operation, hey I want to be in compliance with the fire marshal. Physical safety of my workers. We come in with that IP knowledge, that security knowledge that they need. It's a partnership, I mean people talk about IT and OT convergence, usually, convergence means that, mm, somebody's going to lose their job, this is more and IT and OT partnership. And most of these digitization efforts, usually come in for the CIO level or a chief digitization officer, we've got good relationships there already. The second part is, Cisco's been in this for quite some time our teams already have relationships at the plant level at the grid level, operator level, you know, in the oil and gas area, but we need to build more and more of that. Because building more and more of that is really understanding what business problems are they looking to solve? Then we can bring the technology to it. >> Liz, what's that in the enablement, you mentioned partnership, 'cause that's a good point, 'cause people think, oh, someone wins, someone loses, the partnership is you're enabling, you're bringing new capability into the physical world, you know, from wind farms to whatever. What does the enablement look like, what are some of the things that happen when you guys come into these environments that are being redefined and re-imagined or for the first time? >> Yeah I would say, you know, I'd use what our customer said this morning. And what he said was, IT has the skills that I need alright? They have the IP skills, they have the security skills. These are all the things that I need. I want my guys to focus on kind of business processes. Around things that they know best. And so, we're working with IT as part of what we're putting this extended enterprise, extending Intent-Based Networking to the IoT edge means, IT already knows our tools, our capabilities, we're now saying, we can extend that, let's go out, figure out what those use cases are together, this is why we're working with, not just the IT, we're working with our channel partners as well, who can enable these implementations on IoT implementations work well. Part of this is also a constant, you know, learning from each other. 
We learned from the operational teams is that, hey you can start a proof of concept really well, but you can't really take it to deployment unless you address things around the complexity, the scale and the security, that's where we can come in and help. >> And you can't just come in and throw your switches and routers over the fence and say, okay, here you go. You have to develop specific solutions for this world right? And can you talk about that a little bit? And tell us what you're doing here? >> Absolutely, so, if you look at the networking, industrial networking portfolio that we have, it's built on the same catalyst, ISR, wireless APs or firewall, but they're more customized for this non-carpeted space, right? You've got to take into consideration that these are not sitting in a controlled environment. So, we test them for temperature, for shock, for vibration, but it's also built on the same software, so we're talking about the same software platform, you get the same automation features, you get the same analytics features, it's managed by DNA center, so, even though we're customizing the hardware for this environment, the software platform that you get, is pretty much the same, so IT can come in and manage both those environments, but IT also needs an understanding of what's the operational team looking to solve for? >> Liz, I want to ask you about the psychology of the buyer in this market. Because OT, they're running stuff that's just turnin' on, put in the lightbulb, make it work, what I got to deploy something? So their kind of expectations might be different, can you share what the expectations are, for the kind of experience that they want to have with that? >> I use utility as a great example. And our customer from Ennogie, I think explained this really well. This is thing that we learned from our customers right? I haven't been in a sub station, I've been in a data center multiple times, but I haven't been in a sub station, so when they're talking about automating sub station, we work with customers, we've been doing this over the last 10 years, we've been working with that Ennogie team for the last two years, they taught us really, how they secure and manage in these environments. You're not going to find a CCIE in this environment. So when you want to send somebody out to like 60,000 sub stations, and you want to check on, hey do I still have VPN connectivity? They're not going to be able to troubleshoot it. What we did is based on the customer's ask, put a green light on their LED that shines green, all the technician does is look at it and says, it's okay. If not, they call back in terms of troubleshooting it. It was just a simple example of where, it's different in terms of how they secure and manage and the talent that they have is different than what's in the IT space, so you've got to make sure that your products also cover what the operational teams need, because you're not dealing with the CCIE or the IP expert. >> So it's the classic market fit, product market fit for what they're expecting. >> Correct. >> LEDs, you can't go wrong with a green light, I mean. (laughter) >> You know, everybody goes, that's such an easy thing, it's like well, it was not that perceptive to us. 
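The substation story above reduces troubleshooting to a single indicator a field technician can read. A minimal sketch of that idea follows, assuming the check is simply TCP reachability to a VPN headend and representing the green light as a printed status, since driving an actual front-panel LED would be platform-specific; the headend address and interval are illustrative assumptions.

```python
# Minimal sketch of a "green light" connectivity check for a field device:
# if the VPN headend is reachable, report GREEN; otherwise report RED.
# The headend address is a placeholder; a real device would drive an LED.
import socket
import time

VPN_HEADEND = ("vpn.example-utility.com", 443)  # hypothetical headend
CHECK_INTERVAL_S = 30

def headend_reachable(address, timeout=5.0) -> bool:
    """Return True if a TCP connection to the headend succeeds."""
    try:
        with socket.create_connection(address, timeout=timeout):
            return True
    except OSError:
        return False

def main() -> None:
    while True:
        status = "GREEN" if headend_reachable(VPN_HEADEND) else "RED"
        # On a real device this would set the front-panel LED;
        # here we just print the status for the technician.
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} link status: {status}")
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    main()
```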
>> What's the biggest thing you've learned as you've moved from Cisco engineering out to the new frontier on the edge here, what are the learnings that you've seen, obviously growing mark early, it's only going to get large and more complicated, more automation, more AI, more things, what's your learnings, what have you seen so far that's a takeaway? >> So I'll say, I'm still in Cisco engineering. The reason we're in IoT is that, a secure and reliable network, that's the foundation of any IoT deployment alright? You can go out and buy the best sensor, buy the best application buy the best middleware, but if you don't have that foundation, that's secure and reliable, those IoT projects are not going to take off, so it's pretty simple, everyone's network is the enabler of their business outcome, and that's why we're in it. So this is really about extending that network out, but at the same time, understanding what are we looking to solve for, right? So in many cases, we work with third party partners, 'cause some of them know these domains much better than we do, but we know the IP, we are the IP and the security experts, and we bring that to the table better than anybody else. >> And over the top, DevNet showing here for the second year that we've covered it, here in DevNet zone, that when you have that secure network that's programmable, really cool things can develop on top of it, that's a great opportunity. >> Yeah, this is, I'm super excited that we now have an IoT DevNet. You know, as part of our entire Cisco DevNet. Half a million dev-opers you know, Susie Wee and team done a fabulous job. There's more and more dev-opers going to be starting to develop at the IoT edge, at the edge of the network, right? So when you look at that as, our platforms today with IRX on top of it, make this a software platform that dev-opers can actually build applications to, it's really about, you know, we're ready, ISVs and dev-opers unleashing those applications at the IoT edge. And with Susie making that, you know, available in terms of the tools, the resources, the sandbox that you can get, it's like, we expect to see more and more dev-opers building those applications at the edge. >> We got to talk about your announcements, right, so. >> Oh yeah, exciting set of announcements. >> What's the hard news? >> So we launched four things today as part of extending IBN, or Intent-Based Networking to the IoT edge, the first one is, we've got three new Cisco-validated designs. So think of a validated design as enabling our customers to actually accelerate their deployments, so our engineering teams try to mimic, as much as possible, a customer's environment. And they do this pre-integration, pre-testing of our products, third party products. And we actually put 'em out by industry. So we have three new ones out there for manufacturing, for utilities, and for remote and mobile assets, that's one. The second one is we're launching two new hardware platforms, a next-generation catalyst industrial ethernet switch, it's for modularity of interfaces, and it's got nine expansion packs. The idea is, make it as flexible as possible for a customer's deployment. Because these boxes might sit in an environment not just for three years like in a campus, they could sit there for five, for seven, for 10 years. So as you know, they, adding on, giving them that flexibility, they can be a base system and just change the expansion modules, we also launched our next-generation industrial router. 
It actually is the industry's probably first and only full IPV Six-capable industrial router. And it's got, again flexibility of interfaces, we have LTE, we have fiber, we have copper, you want dual LTE you can actually slap an expansion pack right on top of it. When 5G comes in, you just take the LTE module out, you put 5G, so it's 5G ready. >> Expansions on there. >> And it's based on IOSXC, it's managed by DNA Center, and it's edge-enabled, so they run IOX, you can build your applications, and load 'em on. So we can build 'em, third parties can build 'em. >> And the DevNet piece here as well. >> And the DevNet piece is the third one where we now have, you know, an IoT dev-oper center in the DevNet zone, so with all the tools that are available, it enables dev-opers and ISVs to actually build on top of IOX today. In fact, we actually have more than a couple of three examples that are already doing that. And the fourth thing is, we depend on a large ecosystem of channel partners, so we've launched an IoT specialization training program to enable them to actually help our customers' implementation go faster. >> Mhm. >> So those are the four things that we brought together. The key thing for us was, designing these for scale, flexibility, and security. >> And are these capabilities available today is that right? >> Absolutely, in fact, if you go in, we're shipping in two weeks! And you can see them at the innovation showcase, it's actually very cool. >> I was going to mention, you brought up the ecosystem, glad you brought that up, I was going to ask about how that's developing, I could only imagine new sets of names coming out of the industry in terms of building on these IOTs since this demand for IOT, it's an emerging market in terms of newness, with a lot of head room, so what's the ecosystem look like, is there a pattern, is it ISVs, VARs, does it take the shape of the classic ecosystem or is it a new set of characters or, what's the makeup of the ecosystem? >> Yeah, it's I would say it's, in many ways, if you've been in the IoT world for some time, you'll say, you know, it's not like there's a whole new set of characters. Yes, you have more cloud players in there, you probably have more SIs in there, but it's been like, the distributors are in there, the machine-builders, the OT platforms, these are folks who've been doing this for a long time. It's more around, how do you partner, and where do you monetize? We know where you know, the value we bring in, we rely on, we work very closely with those OT partners, machine-builders, SIs, the cloud partners, to go to market and deliver this. You're right, the market's going to evolve, because the whole new conversation is around data. What do I collect, what I compute at the edge? Where do I route it to, should I take it to my on-premise's data centers, should I take it to the cloud? Who gets control over that data, how do I make sure that I have control over the data as the customer, and I have control over who gets to see it. So I think this will be a evolving conversation. This is something we're enabling with one of our Kinetic platforms, which are not launched, it's already launched in terms of enabling customers to have control over the data and manage the data as well. >> And bringing all the portfolio of Cisco security analytics, management to the table, that puts anything in the world that has power and connectivity to be a device to connect into a system, this is the, I mean how obvious can it be? It's going to be huge! 
>> It's great that you think it's obvious, that's exactly what we're tryin' to tell our customers-- >> How to do it-- >> Well this is about extending this out. >> Yeah, how do we do it's the playbook right? So, each business has its own unique, there's no general purpose IoT is there? >> Correct. >> It's pretty much on a custom custom-- well thanks for coming on Liz, appreciate it. Want to ask you one final question. You know, I was really impressed with Karen had a great session, Karen Walker had a great session yesterday, impact with women, we interviewed you at Grave Hopper in 2015. Cisco's doing amazing work, can you take a minute to talk about some of the things that Cisco's doing around women in computing, women in STEM, just great momentum, great success story and great leadership. >> I would say look at our leadership at Chuck's level, and I think that's a great example in terms of, he brings people on depending on what they can, what they bring to the table, right? They just happen to be a lot of women out there, and the reality is, I work for a company that believes in inclusion, whether it's gender, race, different experiences, different thoughts, different perspectives because, that's where truly, in terms of, you can bring in the culture that drives that innovation. I've been sponsoring our Women in Science and Engineering for I can't remember, the last four or five years. It's a community that continues to grow. And, the reality is, we don't sit in there and talk about, you know, woe is me, and all the things that are happening, what we talk about is, hey what are the cool new technologies that are out there, how do I get my hands on 'em? And yeah, there are, we talk about some things where women are a little reticent and shy to do, so what we learn from other peoples' experiences, many time the guys are very interesting, so what do you sit down there and talk, and I said trust me it's not like, a whining and moaning session, it's more in terms of where we learn from each other. >> Peers talking and sharing ideas-- >> Absolutely. >> Of innovation and building things. >> Yep, and we've got, you know, we look around and we've got a great set of woman leaders throughout the company at every single level in every function. It's great to be there, we continue to sponsor our Grace Hopper, we have some of the biggest presence at Grace Hopper, we do so many other things like connected women within the company. It's just a, I would say, fabulous place to be. >> You guys do a lot of great things for society, great company, great leadership, thank you for doing all of that, it's phenomenal, we love covering it too, so, we'll be at the cloud now today in Silicon Valley, Women in Data Science at Stanford, and among other great things. >> It's definitely a passion of ours. >> Yeah. (talking over each other) >> Awesome, that's great to hear. >> Thanks for coming on, this is theCUBE, live coverage here in Barcelona for Cisco Live! 2018, back with more after this short break, I'm John Furrier with Dave Vellante, be right back. (upbeat music)

Published Date : Jan 29 2019

