
Paola Peraza Calderon & Viraj Parekh, Astronomer | Cube Conversation


 

(soft electronic music) >> Hey everyone, welcome to this CUBE conversation as part of the AWS Startup Showcase, season three, episode one, featuring Astronomer. I'm your host, Lisa Martin. I'm in the CUBE's Palo Alto Studios, and today excited to be joined by a couple of guests, a couple of co-founders from Astronomer. Viraj Parekh is with us, as is Paola Peraza-Calderon. Thanks guys so much for joining us. Excited to dig into Astronomer. >> Thank you so much for having us. >> Yeah, thanks for having us. >> Yeah, and we're going to be talking about the role of data orchestration. Paola, let's go ahead and start with you. Give the audience that understanding, that context about Astronomer and what it is that you guys do. >> Mm-hmm. Yeah, absolutely. So, Astronomer is a, you know, we're a technology and software company for modern data orchestration, as you said, and we're the driving force behind Apache Airflow, the open source workflow management tool that's since been adopted by thousands and thousands of users, and we'll dig into this a little bit more. But, by data orchestration, we mean data pipelines, so generally speaking, getting data from one place to another, transforming it, running it on a schedule, and overall just building a central system that tangibly connects your entire ecosystem of data services, right. So that's Redshift, Snowflake, dbt, et cetera. And so tangibly, we at Astronomer build products powered by Apache Airflow for data teams and for data practitioners, so that they don't have to. So, we sell to data engineers, data scientists, data admins, and we really spend our time doing three things. So, the first is that we build Astro, our flagship cloud service that we'll talk more about. But here, we're really building experiences that make it easier for data practitioners to author, run, and scale their data pipeline footprint on the cloud.
And then, we also contribute to Apache Airflow as an open source project and community. So, we cultivate the community of humans, and we also put out open source developer tools that actually make it easier for individual data practitioners to be productive in their day-to-day jobs, whether or not they actually use our product and pay us money. And then of course, we also have professional services and education and all of these things around our commercial products that enable folks to use our products and use Airflow as effectively as possible. So yeah, super, super happy with everything we've done and hopefully that gives you an idea of where we're starting. >> Awesome, so when you're talking with those, Paola, those data engineers, those data scientists, how do you define data orchestration and what does it mean to them? >> Yeah, yeah, it's a good question. So, you know, if you Google data orchestration you're going to get something about an automated process for organizing siloed data and making it accessible for processing and analysis. But, to your question, what does that actually mean, you know? So, if you look at it from a customer's perspective, we can share a little bit about how we at Astronomer actually do data orchestration ourselves and the problems that it solves for us. So, as many other companies out in the world do, we at Astronomer need to monitor how our own customers use our products, right? And so, we have a weekly meeting, for example, that goes through a dashboard in a dashboarding tool called Sigma where we see the number of monthly customers and how they're engaging with our product. But, to actually do that, you know, we have to use data from our application database, for example, that has behavioral data on what they're actually doing in our product. We also have data from third party API tools, like Salesforce and HubSpot, and other ways in which we actually engage with our customers and track their behavior.
And so, our data team internally at Astronomer uses a bunch of tools to transform and use that data, right? So, we use Fivetran, for example, to ingest. We use Snowflake as our data warehouse. We use other tools for data transformations. And even if we at Astronomer don't do this, you can imagine a data team also using tools like Monte Carlo for data quality, or Hightouch for reverse ETL, or things like that. And, I think the point here is that data teams, you know, that are building data-driven organizations have a plethora of tooling to both ingest the right data and come up with the right interfaces to transform and actually interact with that data. And so, that movement and sort of synchronization of data across your ecosystem is exactly what data orchestration is responsible for. Historically, I think, and Raj will talk more about this, historically, schedulers like cron and Oozie or Control-M have taken a role here, but we think that Apache Airflow has sort of risen over the past few years as the de facto industry standard for writing data pipelines that do tasks, that do data jobs that interact with that ecosystem of tools in your organization. And so, beyond that sort of data pipeline unit, I think where we see it is that data orchestration is not only writing those data pipelines that move your data, but it's also all the things around it, right, so, CI/CD tooling and secrets management, et cetera. So, a long-winded answer here, but I think that's how we talk about it here at Astronomer and how we're building our products. >> Excellent. Great context, Paola. Thank you. Viraj, let's bring you into the conversation. Every company these days has to be a data company, right? They've got to be a software company- >> Mm-hmm. >> whether it's my bank or my grocery store. So, how are companies actually doing data orchestration today, Viraj? >> Yeah, it's a great question.
So, I think one thing to think about is like, on one hand, you know, data orchestration is kind of a new category that we're helping define, but on the other hand, it's something that companies have been doing forever, right? You need to get data moving to use it, you know. You've got to pull it all in place, aggregate it, clean it, et cetera. So, when you look at what companies out there are doing, right. Sometimes, if you're a more kind of born-in-the-cloud company, as we say, you'll adopt all the cloud-native tooling your cloud provider gives you. If you're a bank or another sort of institution like that, you know, you're probably juggling an even wider variety of tools. You're thinking about a cloud migration. You might have things like cron running in one place, Oozie running somewhere else, Informatica running somewhere else, while you're also trying to move all your workloads to the cloud. So, there's quite a large spectrum of what the current state is for companies. And then, kind of like Paola was saying, Apache Airflow started in 2014, and it was actually started by Airbnb, and they put out this blog post that was like, "Hey, here's how we use Apache Airflow to orchestrate our data across all our sources." And really since then, right, it's almost been a decade since then, Airflow emerged as the open source standard, and there's companies of all sorts using it. And, it's really used to tie all these tools together, especially as that number of tools increases, companies move to hybrid cloud, hybrid multi-cloud strategies, and so on and so forth. But you know, what we found is that if you go to any company, especially a larger one, and you say like, "Hey, how are you doing data orchestration?" They'll probably say something like, "Well, I have five data teams, so I have eight different ways I do data orchestration." Right.
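The difference between a plain scheduler like cron and an orchestrator like Airflow comes down to dependencies: cron fires jobs at fixed times, while an orchestrator runs tasks in the order a dependency graph (a DAG) dictates. A minimal sketch of that core idea in plain Python, with made-up task names rather than actual Airflow code:

```python
# Minimal sketch of dependency-aware scheduling: run each task only
# after everything it depends on has finished (a topological order).
def run_pipeline(dependencies):
    """dependencies maps task -> set of upstream tasks it waits on."""
    done, order = set(), []
    while len(done) < len(dependencies):
        ready = [t for t, ups in dependencies.items()
                 if t not in done and ups <= done]
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        for task in sorted(ready):  # sorted only to make output deterministic
            order.append(task)
            done.add(task)
    return order

# Hypothetical pipeline: ingest from two sources, transform, then report.
pipeline = {
    "ingest_salesforce": set(),
    "ingest_app_db": set(),
    "transform": {"ingest_salesforce", "ingest_app_db"},
    "report": {"transform"},
}
print(run_pipeline(pipeline))
# → ['ingest_app_db', 'ingest_salesforce', 'transform', 'report']
```

In real Airflow the graph is declared with operators and `>>` dependencies, and the scheduler adds retries, backfills, and schedules on top; this sketch shows only the ordering idea.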
This idea of data orchestration's been there, but the right way to do it, kind of all the abstractions you need, the way your teams need to work together, and so on and so forth, hasn't really emerged just yet, right? It's such a quick-moving space that companies have to combine what they were doing before with what their new business initiatives are today. So, you know, what we really believe here at Astronomer is Airflow is the core of how you solve data orchestration for any sort of use case, but it's not everything. You know, it needs a little more. And, that's really where our commercial product, Astro, comes in, where we've built not only the most tried and tested Airflow experience out there. We do employ a majority of the Airflow core committers, right? So, we're kind of really deep in the project. We've also built the right things around developer tooling, observability, and reliability for customers to really rely on Astro as the heart of the way they do data orchestration, and kind of think of it as the foundational layer that helps tie together all the different tools, practices, and teams large companies have today. >> That foundational layer is absolutely critical. You've both mentioned open source software. Paola, I want to go back to you, and just give the audience an understanding of how open source really plays into Astronomer's mission as a company, and into technologies like Astro. >> Mm-hmm. Yeah, absolutely. I mean, we, so we at Astronomer started using Airflow and actually building our products because Airflow is open source and we were our own customers at the beginning of our company journey. And, I think the open source community is at the core of everything we do. You know, without that open source community and culture, I think, you know, we have less of a business, and so, we're super invested in continuing to cultivate and grow that.
And, I think there's a couple sort of concrete ways in which we do this that personally make me really excited to do my own job. You know, for one, we do things like organize meetups and sponsor the Airflow Summit, and there's these sort of baseline community efforts that I think are really important and that remind you, hey, these are just humans trying to do their jobs and learn and use both our technology and things that are out there and contribute to it. So, making it easier to contribute to Airflow, for example, is another one of our efforts. As Viraj mentioned, we also employ, you know, engineers internally who are on our team whose full-time job is to make the open source project better. Again, regardless of whether or not you're a customer of ours, we want to make sure that we continue to cultivate the Airflow project in and of itself. And, we're also building developer tooling that might not be a part of the Apache open source project, but is still open source. So, we have repositories in our own sort of GitHub organization, for example, with tools that individual data practitioners, again, customers or not, can use to be more productive in their day-to-day jobs with Airflow, writing DAGs for the most common use cases out there. The last thing I'll say is how important I think we've found it to build sort of educational resources and documentation and best practices. Airflow can be complex. It's been around for a long time. There's a lot of really, really rich feature sets. And so, how do we enable folks to actually use those? And that comes in, you know, things like webinars, and best practices, and courses and curriculum that are free and accessible and open to the community are just some of the ways in which I think we're continuing to invest in that open source community over the next year and beyond. >> That's awesome. It sounds like open source is really core, not only to the mission, but really to the heart of the organization.
Viraj, I want to go back to you and really try to understand how does Astronomer fit into the wider modern data stack and ecosystem? Like, what does that look like for customers? >> Yeah, yeah. So, both in the open source and with our commercial customers, right? Folks everywhere are trying to tie together a huge variety of tools in order to start making sense of their data. And you know, I kind of think of it almost like a pyramid, right? At the base level, you need things like data reliability, data freshness, data availability, and so on and so forth, right? You just need your data to be there. (coughs) I'm sorry. You just need your data to be there, and you need to make it predictable when it's going to be there. You need to make sure it's kind of correct at the highest level, some quality checks, and so on and so forth. And oftentimes, that kind of takes the shape of ELT or ETL use cases, right? Taking data from somewhere and moving it somewhere else, usually into some sort of analytics destination. And, that's really what businesses can do to just power the core parts of getting insights into how their business is going, right? How much revenue did I have? What's in my pipeline in Salesforce, and so on and so forth. Once that kind of base foundation is there and people can get the data they need, how they need it, it really opens up a lot for what customers can do. You know, I think one of the trendier things out there right now is MLOps, and how do companies actually put machine learning into production? Well, when you think about it, you kind of have to squint at it, right? Like, machine learning pipelines are really just like any other data pipeline. They just have a certain set of needs that might not be applicable to ELT pipelines. And, when you kind of have a common layer to tie together all the ways data can move through your organization, that's really what we're trying to make it possible for companies to do.
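The base of Viraj's pyramid, freshness and correctness checks, gates everything above it: a common pattern is to fail a pipeline early when a quality check doesn't pass, so downstream analytics or ML steps never see bad data. A tiny illustrative sketch of that pattern; the rows and check rules here are made up:

```python
# Illustrative quality gate: downstream steps run only if checks pass.
rows = [
    {"customer": "acme", "revenue": 1200},
    {"customer": "globex", "revenue": 870},
]

def check_quality(data):
    """Basic correctness checks run before anything downstream."""
    if not data:
        raise ValueError("quality gate failed: no rows ingested")
    if any(r["revenue"] < 0 for r in data):
        raise ValueError("quality gate failed: negative revenue")
    return data

def total_revenue(data):
    # Downstream "analytics" step; only ever sees checked data.
    return sum(r["revenue"] for r in data)

print(total_revenue(check_quality(rows)))  # → 2070
```

In an orchestrator, the gate would simply be an upstream task whose failure stops the DAG run, which is exactly why the checks sit at the base of the pyramid.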
And, that happens in financial services where, you know, we have some customers who take app data coming from their mobile apps, and actually run it through their fraud detection services to make sure that all the activity is not fraudulent. We have customers that will run sports betting models on our platform, where they'll take data from a bunch of public APIs around different sporting events that are happening, transform all of that in a way their data scientists can build models with it, and then actually bet on sports based on that output. You know, one of my favorite use cases I like to talk about that we saw in the open source: there was one company whose business was to deliver blood transfusions via drone into remote parts of the world. And, it was really cool because they took all this data from all sorts of places, right? Kind of orchestrated all the aggregation and cleaning and analysis that had to happen via Airflow, and the end product would be a drone being shot out into a really remote part of the world to actually give somebody blood who needed it there. Because it turns out, for certain parts of the world, the easiest way to deliver blood to them is via drone and not via some other means. So, all the things people do with the modern data stack are absolutely incredible, right? Like you were saying, every company's trying to be a data-driven company. What really energizes me is knowing that, for all those super great tools out there that power a business, we get to be the connective tissue, or almost like the electricity that kind of ropes them all together and makes it so people can actually do what they need to do. >> Right. Phenomenal use cases that you just described, Raj. I mean, just the variety alone of what you guys are able to do and impact is so cool. So Paola, when you're with those data engineers, those data scientists, in customer conversations, what's your pitch? Why use Astro?
>> Mm-hmm. Yeah, yeah, it's a good question. And honestly, to piggyback off of Viraj, there's so many. I think what keeps me so energized is how mission critical both our product and data orchestration is, and those use cases really are incredible, and we work with customers of all shapes and sizes. But, to answer your question, right, so why use Astro? Why use our commercial products? There's so many people using open source, why pay for something more than that? So, you know, the baseline for our business really is that Airflow has grown exponentially over the last five years, and like we said has become an industry standard, so we're confident there's a huge opportunity for us as a company and as a team. But, we also strongly believe that being great at running Airflow, you know, doesn't make you a successful company at what you do. What makes you a successful company at what you do is building great products and solving problems and solving pain points of your own customers, right? And, that differentiating value isn't being amazing at running Airflow. That should be our job. And so, we want to abstract those customers from needing to do things like manage the Kubernetes infrastructure that you need to run Airflow, and then hiring someone full-time to go do that. Which can be hard, but again doesn't add differentiating value to your team, or to your product, or to your customers. So, helping folks get away from managing that infrastructure is sort of a base layer. Folks who are looking for differentiating features that make their team more productive and allow them to spend less time tweaking Airflow configurations and more time working with the data that they're getting from their business. And for help staying up to date with Airflow releases.
We've actually been pretty quick to come out with new Airflow features and releases, and actually just keeping up with that feature set and working strategically with a partner to help you make the most out of those feature sets is a key part of it. And really, especially if you're an organization who currently is committed to using Airflow, you likely have a lot of Airflow environments across your organization. And, being able to see those Airflow environments in a single place, and being able to enable your data practitioners to create Airflow environments with a click of a button, and then use, for example, our command line to develop your Airflow DAGs locally and push them up to our product, and use all of the sort of testing and monitoring and observability that we have on top of our product, is such a key piece. It sounds so simple, especially if you use Airflow, but really those things are, you know, baseline value props that we have for the customers that continue to be excited to work with us. And of course, I think we can go beyond that, and we have ambitions to add a whole bunch of features and expand into different types of personas. >> Right? >> But really our main value prop is for companies who are committed to Airflow and want to abstract themselves from that infrastructure and make use of some of the differentiating features that we now have at Astronomer. >> Got it. Awesome. >> Thank you. One thing I'll add to that, Paola, and I think you did a good job of saying it, is because every company's trying to be a data company, companies are at different parts of their journey along that, right? And we want to meet customers where they are, and take them through it to where they want to go. So, on one end you have folks who are like, "Hey, we're just building a data team here. We have a new initiative. We heard about Airflow. How do you help us out?"
On the farther end, you know, we have some customers that have been using Airflow for five plus years and they're like, "Hey, this is awesome. We have 10 more teams we want to bring on. How can you help with this? How can we do more stuff in the open source with you? How can we tell our story together?" And, it's all about kind of taking this vast community of data users everywhere, seeing where they're at, and saying like, "Hey, Astro and Airflow can take you to the next place that you want to go." >> Which is incredibly- >> Mm-hmm. >> and you bring up a great point, Viraj, that every company is somewhere in a different place on that journey. And it's complex. But it sounds to me like a lot of what you're doing is really stripping away a lot of the complexity, really enabling folks to use their data as quickly as possible, so that it's relevant and they can serve up, you know, the right products and services to whoever wants what. Really incredibly important. We're almost out of time, but I'd love to get both of your perspectives on what's next for Astronomer. You've given us a great overview of what the company's doing, and the value in it for customers. Paola, from your lens as one of the co-founders, what's next? >> Yeah, I mean, I think we'll continue to cultivate that open source community. I think we'll continue to build products that are open sourced as part of our ecosystem. I also think that we'll continue to build products that actually make Airflow, and getting started with Airflow, more accessible. So, sort of lowering that barrier to entry to our products, whether that's price-wise or infrastructure-requirement-wise. I think making it easier for folks to get started and get their hands on our product is super important for us this year. And really, I think, you know, for us, it's really about focused execution this year and all of the sort of core principles that we've been talking about.
And continuing to invest in all of the things around our product that, again, enable teams to use Airflow more effectively and efficiently. >> And that efficiency piece is, everybody needs that. Last question, Viraj, for you. What do you see in terms of the next year for Astronomer and for your role? >> Yeah, you know, I think Paola did a really good job of laying it out. So it's really hard to disagree with her on anything, right? I think executing is definitely the most important thing. My own personal bias on that is I think more than ever it's important to really galvanize the community around Airflow. So, we're going to be focusing on that a lot. We want to make it easier for our users to get our product into their hands, be that open source users or commercial users. And last, but certainly not least, is we're also really excited about data lineage and this other open source project in our umbrella called OpenLineage, to make it so that there's a standard way for users to get lineage out of the different systems that they use. When we think about what's in store for data lineage and needing to audit the way automated decisions are being made, you know, I think that's just such an important thing that companies are really just starting with, and I don't think there's a solution that's emerged that kind of ties it all together. So, we think that as we kind of grow the role of Airflow, right, we can also make it so that we're helping customers solve their lineage problems all in Astro, which is kind of the best of both worlds for us. >> Awesome. I can definitely feel and hear the enthusiasm and the passion that you both bring to Astronomer, to your customers, to your team. I love it. We could keep talking more and more, so you're going to have to come back. (laughing) Viraj, Paola, thank you so much for joining me today on this showcase conversation.
We really appreciate your insights and all the context that you provided about Astronomer. >> Thank you so much for having us. >> My pleasure. For my guests, I'm Lisa Martin. You're watching this Cube conversation. (soft electronic music)

Published Date : Feb 21 2023



Phil Brotherton, NetApp | Broadcom’s Acquisition of VMware


 

(upbeat music) >> Hello, this is Dave Vellante, and we're here to talk about the massive $61 billion planned acquisition of VMware by Broadcom. And I'm here with Phil Brotherton of NetApp to discuss the implications for customers, for the industry, and NetApp's particular point of view. Phil, welcome. Good to see you again. >> It's great to see you, Dave. >> So this topic has garnered a lot of conversation. What's your take on this epic event? What does it mean for the industry generally, and customers specifically? >> You know, I think time will tell a little bit, Dave. We're in the early days. We've, you know, so we heard the original announcements and then it's evolved a little bit, as we're going now. I think overall it'll be good for the ecosystem in the end. There's a lot you can do when you start combining what VMware can do with compute and some of the hardware assets of Broadcom. There's a lot of security things that can be brought, for example, to the infrastructure, that are very high-end and cool, and then integrated, so it's easy to do. So I think there's a lot of upside for it. There's obviously a lot of concern about what it means for vendor consolidation and pricing and things like that. So time will tell. >> You know, when this announcement first came out, I wrote a piece, you know, how "Broadcom will tame the VMware beast," I called it. And, you know, looked at Broadcom's history and said they're going to cut, they're going to raise prices, et cetera, et cetera. But I've seen a different tone, certainly, as Broadcom has got into the details. And I'm sure I and others maybe scared a lot of customers, but I think everybody's kind of calming down now. What are you hearing from customers about this acquisition? How are they thinking about it? >> You know, I think it varies. There's, I'd say generally we have like half our installed base, Dave, runs ESX Server, so the bulk of our customers use VMware, and generally they love VMware. 
And I'm talking mainly on-prem. We're just extending to the cloud now, really, at scale. And there's a lot of interest in continuing to do that, and that's really strong. The piece where people are being careful is the vendor and cost issues that have come up. The things that were in your piece, actually. And what does that mean to me, and how do I balance that out? Those are the questions people are dealing with right now. >> Yeah, so there's obviously a lot of talk about the macro, the macro headwinds. Everybody's being a little cautious. The CIOs are tapping the brakes. We all sort of know that story. But we have some data from our partner ETR. They go out every quarter and they survey, you know, 1500 or so IT practitioners, and they ask the ones that are planning to spend less, that are cutting, "How are you going to approach that? What's your primary methodology in terms of achieving, you know, cost optimization?" The number one, by far, answer was to consolidate redundant vendors. It's now up to about 40%. The distant second was, "We're going to, you know, optimize cloud costs." You know, still significant, but it was really that consolidating of redundant vendors. Do you see that? How does NetApp fit into that? >> Yeah, that is an interesting, that's a very interesting bit of research, Dave. I think it's very right. One thing I would say is, I've been in the infrastructure business in Silicon Valley now for 30 years. So these ups and downs are a consistent thing in our industry, and I always think people should think of their infrastructure in terms of cost management. That's always an issue with infrastructure. What I've told customers forever is that when you look at cost management, our best customers at cost management are typically service providers. There's another aspect to cost management, which is you want to automate as much as possible.
And automation goes along with vendor consolidation, because of how you automate different products; you don't want to have too many vendors in your layers. And what I mean by the layers of the ecosystem: there's a storage layer, the network layer, the compute layer, the security layer, the database layer, et cetera. When you think like that, everybody should pick their partners very carefully, per layer. And one last thought on this is, it's not like people are dumb and not trying to do this. When you look at what happens in the real world, acquisitions happen, things change as you go. And in these big customers, that's just normal, that things change. But you always have to have this push towards consolidating and picking your vendors very carefully. >> Also, just to follow up on that, I mean, you know, when you think about multi-cloud, and you mentioned, you know, you've got some big customers, they do a lot of M & A, it's kind of been multi-cloud by accident. "Oh, we got all these other tools and storage platforms and whatever it is." So where does NetApp fit in that whole consolidation equation? I'm thinking about, you know, cross-cloud services, which is a big VMware theme, thinking about a consistent experience, on-prem, hybrid, across the three big clouds, out to the edge. Where do you fit? >> So our view has been, and we extend it to the cloud, that the data layer, which in our software is called ONTAP, is a really important layer that provides a lot of efficiency. It only gets bigger: how you do compliance, how you do backup, DR, blah blah blah. All those data layer services need to operate on-prem and on the clouds. So when you look at what we've done over the years, we've extended our data layer to all the clouds. We've put controls, management tools, over the top, so that you can manage the entire data layer, on-prem and cloud, as one layer.
And we're continuing to head down that path, 'cause we think that data layer is obviously the path to the maximum ability to do compliance, maximum cost advantages, et cetera. So we've really been the company that set our sights on managing the data layer. Now, if you look at VMware, going up into the network layer, the compute layer, VMware is a great partner, and that's why we work with them so closely: they're such a perfect fit for us, and they've been a great partner for 20 years, connecting those infrastructure layers: compute, network, and storage. >> Well, just to stay on that for a second. I've seen recently, you've kind of doubled down on your VMware alliance. You've got stuff at re:Invent I saw, with AWS, you're close to Azure, and I'm really talking about ONTAP, which is sort of an extension of what you were just talking about, Phil, which is, you know, it's kind of NetApp's storage operating system, if you will. It's world class. But so, maybe talk about that relationship a little bit, and how you see it evolving. >> Well, so what we've been seeing consistently is, customers want to use the advantages of the cloud. So, point one. And when you have to completely refactor apps and all this stuff, it limits, it's friction. It limits what you can do, it raises costs. And what we did with VMware, VMware is this great platform for being able to run basically client-server apps on-prem and cloud, the exact same way. The problem is, when you have large data sets in the VMs, there's some cost issues and things, especially on the cloud. That drove us to work together, and do what we did. So NetApp is, let me say this right, the only independent storage platform certified to run with VMware Cloud on AWS. We GA-ed that last summer. We GA-ed with Azure, the Azure VMware Solution, a couple months ago. And you'll see news coming with GCP soon.
And so the idea was, make it easy for customers to basically run in a hybrid model. And then if you back out and go, "What does that mean for you as a customer?", it's not saying you should go to the cloud, necessarily, or stay on-prem, or whatever. But it's giving you the flexibility to cost-optimize where you want to be. And from a data management point of view, ONTAP gives you the consistent data management, whichever way you decide to go. >> Yeah, so I've been following NetApp for decades, when you were Network Appliance, and I saw you go from kind of the workstation space into the enterprise. I saw you lean into virtualization really early on, and you've been a great VMware partner ever since. And you were early in cloud, so, sort of talking about, you know, that cross-cloud, what we call supercloud. I'm interested in what you're seeing in terms of specific actions that customers are taking. Like, I think about ELAs, and I think it's a two-edged sword. You know, should customers, you know, lean into ELAs right now? You know, what are you seeing there? You talked about, you know, sort of modernizing apps with things like Kubernetes, you know, cloud migration. What are some of the techniques that you're advising customers to take in the context of this acquisition? >> You know, so the basics of this are pretty easy. One is, and I think even Raghu, the CEO of VMware, has talked about this. Extending your ELA is probably a good idea. Like I said, customers love VMware, so having a commitment for a time, consistent cost management for a time is a good strategy. And I think that's why you're hearing ELA extensions being discussed. It's a good idea. The second part, and I think it goes to your surveys, that cost optimization point on the cloud is, moving to the cloud has huge advantages, but if you just kind of lift and shift, oftentimes the costs aren't realized the way you'd want. 
And the term "modernization," changing your app to use more Kubernetes, more cloud-native services, is often a consideration that goes into that. But that requires time. And you know, most companies have hundreds of apps, or thousands of apps, they have to consider modernizing. So you want to then think through the journey, what apps are going to move, what gets modernized, what gets lifted-shifted, how many data centers are you compressing? There's a lot of data center consolidation, the term I've even been hearing is "data center evacuations." So there's even some energy savings advantages sometimes with that. But, to back up to my whole point, the whole point is having the infrastructure that gives you the flexibility to make the journey on your cost advantages and your business requirements. Not being forced into it. Like, it's not really a philosophy, it's more of a business optimization strategy. >> When you think about application modernization and Kubernetes, how does NetApp, you know, fit into that, as a data layer? >> Well, so if you kind of think, you said, like our journey, Dave, was, when we started our life, we were doing basically virtualization of volumes and things for technical customers. And the servers were always bare metal servers that we got involved with back then. This is, like, going back 20 years. Then everyone moved to VMs, and, like, it's probably, today, I mean, getting to your question in a second, but today, loosely, 20% bare metal servers, 80% virtual machines today. And containers are now a big growing piece. So, if you will, sort of another level of virtual machines in containers. And containers were historically stateless, meaning the storage didn't have anything to do. Storage is always the stateful area in the architectures. But as containers are getting used more, stateful containers have become a big deal.
So we've put a lot of emphasis into a product line we call Astra that is the world's best data management for containers. And that's both a cloud service and used on-prem in a lot of my customers. It's a big growth area. So that's what, when I say, like, one partner that can do data management, just, that's what we have to do. We have to keep moving with our customers to the type of data they want to store, and how do you store it most efficiently? Hey, one last thought on this is, where I really see this happening, there's a booming business right now in artificial intelligence, and we call it modern data analytics, but people combining big data lakes with AI, and that's where some of this, a lot of the container work comes in. We've extended objects, we have a thing we call file-object duality, to make it easy to bridge the old world of files to the new world of objects. Those all go hand in hand with app modernization. >> Yeah, it's a great thing about this industry. It never sits still. And you're right, it's- >> It's why I'm in it. >> Me too. Yeah, it's so much fun. There's always something. >> It is an abstraction layer. There's always going to be another abstraction layer. Serverless is another example. It's, you know, primarily stateless, that's probably going to, you know, change over time. All right, last question. In thinking about this Broadcom acquisition of VMware, in the macro climate, put a sort of bow on where NetApp fits into this equation. What's the value you bring in this context? >> Oh yeah, well it's like I said earlier, I think it's the data layer of, it's being the data layer that gives you what you guys call the supercloud, that gives you the ability to choose which cloud. Another thing, all customers are running at least two clouds, and you want to be able to pick and choose, and do it your way. So being the data layer, VMware is going to be in our infrastructures for at least as long as I'm in the computer business, Dave. 
I'm getting a little old. So maybe, you know, but "decades" I think is an easy prediction, and we plan to work with VMware very closely, along with our customers, as they extend from on-prem to hybrid cloud operations. That's where I think this will go. >> Yeah, and I think you're absolutely right. Look at the business case for migrating off of VMware. It just doesn't make sense. It works, it's world class, it recover... They've done so much amazing, you know, they used to be called, Moritz called it the software mainframe, right? And that's kind of what it is. I mean, it means it doesn't go down, right? And it supports virtually any application, you know, around the world, so. >> And I think getting back to your original point about your article, from the very beginning, is, I think Broadcom's really getting a sense of what they've bought, and it's going to be, hopefully, I think it'll be really a fun, another fun era in our business. >> Well, and you can drive EBIT a couple of ways. You can cut, okay, fine. And I'm sure there's some redundancies that they'll find. But there's also, you can drive top-line revenue. And you know, we've seen how, you know, EMC and then Dell used that growth from VMware to throw off free cash flow, and it was just, you know, funded so much, you know, innovation. So innovation is the key. Hock Tan has talked about that a lot. I think there's a perception that Broadcom, you know, doesn't invest in R & D. That's not true. I think they just get very focused with that investment. So, Phil, I really appreciate your time. Thanks so much for joining us. >> Thanks a lot, Dave. It's fun being here. >> Yeah, our pleasure. And thank you for watching theCUBE, your leader in enterprise and emerging tech coverage. (upbeat music)

Published Date : Jan 31 2023


Thomas Been, DataStax | AWS re:Invent 2022


 

(intro music) >> Good afternoon guys and gals. Welcome back to The Strip, Las Vegas. It's "theCUBE" live day four of our coverage of "AWS re:Invent". Lisa Martin, Dave Vellante. Dave, we've had some awesome conversations the last four days. I can't believe how many people are still here. The AWS ecosystem seems stronger than ever. >> Yeah, last year we really noted the ecosystem, you know, coming out of the isolation economy 'cause everybody had this old pent up demand to get together and the ecosystem, even last year, we were like, "Wow." This year's like 10x wow. >> It really is 10x wow, it feels that way. We're going to have a 10x wow conversation next. We're bringing back DataStax to "theCUBE". Please welcome Thomas Been, its CMO. Thomas, welcome to "theCUBE". >> Thanks, thanks a lot, thanks for having me. >> Great to have you, talk to us about what's going on at DataStax, it's been a little while since we talked to you guys. >> Indeed, so DataStax, we are the realtime data company and we've always been involved in technology such as "Apache Cassandra". We were actually created to support and take this, this great technology to the market. And now we're taking it, combining it with other technologies such as "Apache Pulsar" for streaming to provide a realtime data cloud, which helps our users, our customers build applications faster and helps them scale without limits. So it's all about mobilizing all of this information that is going to drive the application, going to create the awesome experience, when you have a customer waiting behind their mobile phone, when you need a decision to take place immediately, that's the kind of data that we, that we provide in the cloud, on any cloud, but especially with, with AWS, and providing the performance that technologies like "Apache Cassandra" are known for but also with market leading unit economics. So really empowering customers to operate at speed and scale. >> Speaking of customers, nobody wants less data slower.
And one of the things I think we learned during the pandemic was that access to realtime data isn't a nice-to-have anymore for any business. It is table stakes, it's competitive advantage. There's somebody right behind in the rear view mirror ready to take over. How has the business model of DataStax maybe evolved in the last couple of years with the fact that realtime data is so critical? >> Realtime data has been around for some time but it used to be really niche. You needed a lot of, a lot of people, a lot of funding actually to, to implement these, these applications. So we've adapted to really democratize it, made it super easy to access. Not only to start developing but also scaling. So this is why we've taken these great technologies, made them serverless, cloud native on the cloud, so that developers could really start easily and scale. So that beyond projects, products could be taken to the, to the market. And in terms of customers, the pattern is, we've seen enterprise customers, you were talking about the pandemic, The Home Depot as an example was able to deliver curbside pickup delivery in 30 days because they were already using DataStax and could adapt their business model with a real time application, where you were just driving by and you would get the delivery of exactly what you ordered without having to go into the store. So they shifted their whole business model. But we also see a real strong trend about customer experiences, and increasingly a lot of tech companies coming because scale means success to them, and building on, on our, on our stack to, to build their applications. >> So Lisa, it's interesting. DataStax and "theCUBE" were started the same year, 2010, and that's when it was the beginning of the ascendancy of the big data era. But of course back then there was, I mean very little cloud. I mean most of it was on-prem.
And so DataStax had, you know, had obviously, you mentioned a number of things that you had to do to become cloud friendly. >> Thomas: Yes. >> You know, a lot of companies didn't make it, make it through. You guys just raised a bunch of dough as well last summer. And so that's been quite a transformation both architecturally, you know, bringing the customers through. I presume part of that was because you had such a great open source community, but also you have a unique value proposition. Maybe you could sort of describe that a little. >> Absolutely, so the, I'll start with the open source community where we see a lot of traction at the, at the moment. We were always very involved with, with the "Apache Cassandra". But what we're seeing right now with "Apache Cassandra" is, is a lot of traction, gaining momentum. We actually, we, the open source community just won an award, did an AMA, had a, a vote from their readers about the top open source projects, and "Apache Cassandra" and "Apache Pulsar" are part of the top three, which is, which is great. We also run, in collaboration with the Apache Project, a series of events around the globe called "Cassandra Days" where we had tremendous attendance. We, some of them, we had to change venue twice because there were more people coming. A lot of students, a lot of the big users of Cassandra like Apple, Netflix who spoke at these, at these events. So we see this momentum actually picking up and that's why we're also super excited that the Linux Foundation is running the Cassandra Summit in March in San Jose. Super happy to bring that event back with the rest of the, of the community, and we have big announcements to come. "Apache Cassandra" will, will see its next version with major advances such as the support of ACID transactions, which is going to make it even more suitable to more use cases. So we're bringing that scale to more applications.
So a lot of momentum in terms of, in terms of the, the open source projects. And to your point about the value proposition, we take this great momentum to which we contribute a lot. It's not only about taking, it's about giving as well. >> Dave: Big committers, I mean... >> Exactly, big contributors. And we also have a lot of expertise, we worked with all of the members of the community, many of them being our customers. So going to the cloud, indeed there was architectural work, making Cassandra cloud native, putting it on Kubernetes, having the right APIs for developers to, to easily develop on top of it. But also becoming a cloud company, building customer success, our own platform engineering. We, it's interesting because actually we became like our partners in the community. We now operate Cassandra in the cloud so that all of our customers can benefit from all the power of Cassandra but really efficiently, super rapidly, and also with the leading unit economics as I mentioned. >> How will the, the ACID compliance affect your, you know, new markets, new use cases, you know, expand your TAM, can you explain that? >> I think it will, more applications will be able to tap into the power of, of "NoSQL". Today we see a lot on the customer experience side: IoT, gaming platforms, a lot of SaaS companies. But now with the ability to have transactions at the database level, we can, beyond providing information, we can go even deeper into the logic of the, of the application. So it makes Cassandra, and therefore Astra, which is our cloud service, an even more suitable database; we can address, address more even in terms of the transactions that the application itself will, will support. >> What are some of the business benefits that Cassandra delivers to customers in terms of business outcomes, helping businesses really transform? >> So Cassandra brings scale, when you have millions of customers, when you have millions of data points to go through to serve each of the customers.
One of my favorite examples is Priceline, who runs entirely on our cloud service. You may see one offer, but it's actually everything they know about you and everything they have to offer, matched while you are refreshing your page. This is the kind of power that Cassandra provides. But the thing to say about "Apache Cassandra", it used to be also a database that was a bit hard to manage and hard to develop with. This is why, as part of the cloud, we wanted to change these aspects, provide developers the API they like and need and what the application needs. Making it super simple to operate and, and, and super affordable, also cost effective to, to run. So the value, to your point, it's time to market. You go faster, you don't have to worry, when you choose the right database, you're not going to, going to have to change horses in the middle of the river, like six months down the line. And you know, you have the guarantee that you're going to get the performance and also the best, the best TCO, which matters a lot. I think your previous person talking was addressing it. That's also important, especially in the current context. >> As a managed service, you're saying, that's the enabler there, right? >> Thomas: Exactly. >> Dave: That is the model today. I mean, you have to really provide that for customers. They don't want to mess with, you know, all the plumbing, right? I mean... >> Absolutely, I don't think people want to manage databases anymore, we do that very well. We take SLAs and such, and even at the developer level, what they want is an API so they get all the power. All of this powered by Cassandra, but now they get it as, and it's as simple as using, as an API. >> How about the ecosystem? You mentioned the show in San Jose in March and the Linux Foundation is, is hosting that, is that correct? >> Yes, absolutely. >> And what is it, Cassandra? >> Cassandra Summit. >> Dave: Cassandra Summit >> Yep.
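The layout behind that kind of scale is worth sketching: Cassandra spreads a table across nodes by partition key and orders rows within each partition by a clustering column, so a per-customer, time-ordered read stays on a single partition. A hypothetical CQL schema, with illustrative names only (this is not any customer's actual data model):

```sql
-- Hypothetical schema: one partition per customer, rows clustered
-- newest-first so "latest activity" reads scan a single partition.
CREATE TABLE shop.customer_events (
    customer_id text,
    event_time  timestamp,
    event_type  text,
    payload     text,
    PRIMARY KEY ((customer_id), event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);

-- Serving a customer-facing page then touches exactly one partition:
SELECT event_type, payload
FROM shop.customer_events
WHERE customer_id = 'c-123'
LIMIT 50;
```

Because every query is routed by the partition key, adding nodes grows capacity without changing the access pattern, which is the property a managed service can then expose behind simple APIs.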
>> What's the ecosystem like today in Cassandra, can you just sort of describe that? >> Around Cassandra, you have actually the big hyperscalers. You have also a few other companies that are supporting Cassandra-like technologies. And what's interesting, and that's been something we've worked on but also the "Apache Project" has worked on, is a lot of the adjacent technologies, the data pipelines, all of the DevOps solutions, to make sure that you can actually put Cassandra as part of your way to build these products and, and build these, these applications. So the, the ecosystem keeps on, keeps on growing and actually the, the Cassandra community keeps on opening the database so that it's, it's really easy to have it connect to the rest of the, the rest of the environment. And we benefit from all of this in our Astra cloud service. >> So things like machine learning, governance tools, that's what you would expect in the ecosystem forming around it, right? So we'll see that in March. >> Machine learning is especially a very interesting use case. We see more and more of it. We recently did a, a nice video with one of our customers called Uniphore, who does exactly this using also our Astra cloud service. What they provide is, they analyze videos of sales calls and they help actually the sellers, telling them, "Okay, here's what happened, here was the customer sentiment". Because they have proof that the better the sentiment is, the shorter the sales cycle is going to be. So they teach the, the sellers on how to say the right things, how to control the thing. This is machine learning applied on video. Cassandra provides I think 200 data points per second that feed this machine learning. And we see more and more of these use cases, realtime use cases. It happens on the fly when you are on your phone, when you have a, a fraud maybe to detect and to prevent.
So it is going to be more and more, and we see more and more of these integrations at the open source level, with technologies like the "Feast" project, "Apache Feast". But also in the, in, in the partners that we're working with, integrating our Cassandra and our cloud service with. >> Where are customer conversations these days, given that every company has to be a data company? They have to be able to, to democratize data, allow access to it deep into the, into the organizations. Not just IT or the data organization anymore. But are you finding that the conversations are rising up the, up the stack? Is this, is this a C-suite priority? Is this a board level conversation? >> So that's an excellent question. We actually ran a survey this summer called "The State of the Database" where we, we asked these tech leaders, okay, what's top of mind for you? And real time actually was, was really one of the top priorities. And they explained, for the ones who call themselves digital leaders, that for 71% of them they could correlate directly the use of realtime data, the quality of their experience or their decision making, with revenue. And that's really where the discussion is. And I think it's something we can relate to as users. We don't want the, I mean, if the Starbucks app takes seconds to respond there will be a riot over there. So that's, that's something we can feel. But it really, now it's tangible in, in business terms, and now when they take a look at their data strategy, are we equipped? Very often they will see, yeah, we have pockets of realtime data, but we're not really able to leverage it. >> Lisa: Yeah. >> For ML use cases, et cetera. So that's a big trend that we're seeing on one end. On the other end, what we're seeing, and it's one of the things we discussed a lot at the event, is that yeah, cost is important. Growth at all costs does not exist.
So we see a lot of push on moving a lot of the workloads to the cloud to make them scale but at the best the best cost. And we also see some organizations where like, okay let's not let a good crisis go to waste and let's accelerate our innovation not at all costs. So that we see also a lot of new projects being being pushed but reasonable, starting small and, and growing and all of this fueled by, by realtime data, so interesting. >> The other big topic amongst the, the customer community is security. >> Yep. >> I presume it's coming up a lot. What's the conversation like with DataStax? >> That's a topic we've been working on intensely since the creation of Astra less than two years ago. And we keep on reinforcing as any, any cloud provider not only our own abilities in terms of making sure that customers can manage their own keys, et cetera. But also integrating to the rest of the, of the ecosystem when some, a lot of our customers are running on AWS, how do we integrate with PrivateLink and such? We fit exactly into their security environment on AWS and they use exactly the same management tool. Because this is also what used to cost a lot in the cloud services. How much do you have to do to wire them and, and manage. And there are indeed compliance and governance challenges. So that's why making sure that it's fully connected that they have full transparency on what's happening is, is a big part of the evolution. It's always, security is always something you're working on but it's, it's a major topic for us. >> Yep, we talk about that on pretty much every event. Security, which we could dive into, but we're out of time. Last question for you. >> Thomas: Yes. >> We're talking before we went live, we're both big Formula One fans. Say DataStax has the opportunity to sponsor a team and you get the whole side pod to, to put like a phrase about DataStax on the side pod of this F1 car. (laughter) Like a billboard, what does it say? 
>> Billboard, because an F1 car goes pretty fast, it will be hard to read, but: "Twice the performance at half the cost, try Astra, our cloud service." >> Drop the mic. Awesome, Thomas, thanks so much for joining us. >> Thanks for having me. >> Pleasure having you guys on the program. For our guest, Thomas Been, and Dave Vellante, I'm Lisa Martin and you're watching "theCUBE" live from day four of our coverage. "theCUBE", the leader in live tech coverage. (outro music)

Published Date : Dec 1 2022


Angelo Fausti & Caleb Maclachlan | The Future is Built on InfluxDB


 

>> Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb Maclachlan, who's senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't want to miss this interview, folks. Caleb, let's start with you. You work for an extremely cool company, you're launching satellites into space. Of course doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >> Yeah, absolutely. And thanks for having me here by the way. So Loft Orbital is a company, a series B startup now, and our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about just a lot of very specialized engineering. And what we're trying to do is change that from a super specialized problem that has an extremely high barrier of access, to an infrastructure problem. So that getting your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, stuff like that, is almost as simple as deploying a VM in AWS or GCP. So, that's kind of our mission, and just to give a really brief example of the kind of customer that we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation for internet of things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean.
So, and it's really little, and they've been able to stay a small startup that's focused on their product, which is the, that super crazy, complicated, cool radio, while we handle the whole space segment for them, which just, you know, before Loft was really impossible. So that's our mission is providing space infrastructure as a service. We are kind of groundbreaking in this area and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. >> Yeah. So amazing Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so, I guess just a little bit about me. For some people, they don't necessarily know what they want to do like earlier in their life. For me I was five years old and I knew I want to be in the space industry. So, I started in the Air Force, but have stayed in the space industry my whole career and been a part of, this is the fifth space startup that I've been a part of actually. So, I've kind of started out in satellites, spent some time in working in the launch industry on rockets, then, now I'm here back in satellites and honestly, this is the most exciting of the different space startups that I've been a part of. >> Super interesting. Okay. Angelo, let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys, the Observatory, you're up way up high, you get a good look at the Southern sky. And I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close, you got to be super excited, give us the update on the Observatory and your role. >> All right. So, yeah. Rubin is a state of the art observatory that is in construction on a remote mountain in Chile. And, with Rubin we'll conduct the large survey of space and time. 
We're going to observe the sky with an eight-meter optical telescope and take 1000 pictures every night with a 3.2 Gigapixel camera. And we are going to do that for 10 years, which is the duration of the survey. >> Yeah, amazing project. Now, you earned a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went out to earn a PhD in astronomy and astrophysics. So, this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right, about 15 years. I studied physics in college. Then I got a PhD in astronomy. And, I worked for about five years in another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common of course is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB, get into it? How do you use the platform? Maybe Caleb you could start. >> Yeah, absolutely. So, the first company that I extensively used InfluxDB in was a launch startup called Astra. And we were in the process of designing our first generation rocket there, and testing the engines, pumps, everything that goes into a rocket. And, when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW and engineers were taking that over to MATLAB to process it. And at first, you know, that's the way that a lot of engineers and scientists are used to working, and people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly, and then quickly also backport all of the data that we collected thus far into Influx.
And what was amazing to see, and is kind of the super cool moment with Influx, is when we hooked that up to Grafana, Grafana being the visualization platform we used with Influx, 'cause it works really well with it. There was like this aha moment of our engineers, who are used to this post process kind of method for dealing with their data, where they could just almost instantly, easily discover data that they hadn't been able to see before, and take the manual processes that they would run after a test and just throw those all in Influx and have live data as tests were coming in, and I saw them implementing like crazy rocket equation type stuff in Influx, and it just was totally game changing for how we tested. >> So Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about in the example that Caleb just gave, you have to have a purpose built time series database. Where did you first learn about InfluxDB? >> Yeah, correct. So, I work with the data management team, and my first project was to record metrics that measured the performance of our software, the software that we use to process the data. So I started implementing that in our relational database. But then I realized that in fact I was dealing with time series data and I should really use a solution built for that. And then I started looking at time series databases and I found InfluxDB, and that was back in 2018. Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time and pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series. And every point in that time series, we call a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1000 points every night.
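For a concrete flavor of what recording such visit metadata might look like, here is a hypothetical sketch of one visit point formatted in InfluxDB's line protocol (measurement, tags, fields, and a nanosecond timestamp). The measurement, tag, and field names are illustrative only, not Rubin's actual schema:

```python
def visit_point(visit_id: int, ra: float, dec: float, exposure_s: float, ts_ns: int) -> str:
    """Format one telescope visit as an InfluxDB line-protocol point.

    Line protocol shape: measurement,tag=... field=...,field=... timestamp
    Integer fields carry an 'i' suffix; the timestamp is in nanoseconds.
    """
    return (
        f"visits,telescope=rubin "
        f"visit_id={visit_id}i,ra={ra},dec={dec},exposure={exposure_s} "
        f"{ts_ns}"
    )

point = visit_point(42, 180.0, -30.5, 30.0, 1652832000000000000)
print(point)
# visits,telescope=rubin visit_id=42i,ra=180.0,dec=-30.5,exposure=30.0 1652832000000000000
```

A point like this could be written through any InfluxDB client or the HTTP write endpoint; the point is just how little structure a visit record needs.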
It's actually not too much data compared to other problems. It's really just a different time scale. >> The telescope at the Rubin Observatory is like, pun intended, I guess the star of the show. And I believe I read that it's going to be the first of the next gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. Like, that's like 40 moons in an image, amazingly fast as well. What else can you tell us about the telescope? >> This telescope, it has to move really fast. And it also has to carry the primary mirror, which is an eight meter piece of glass. It's very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300 ton structure, sits on a tiny film of oil, which has the diameter of a human hair. And that makes an almost zero friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So, each image has, in diameter, the size of about seven full moons. And with that, we can map the entire sky in only three days. And of course, during operations everything is controlled by software and it is automatic. There's a very complex piece of software called the Scheduler, which is responsible for moving the telescope, and the camera, which is recording 15 terabytes of data every night. >> And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Yeah, actually not. So we use InfluxDB to record engineering data and metadata about the observations. Like telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images.
But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about the, you've got these dishwasher size satellites, kind of using a multi-tenant model, I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So, we have in space some satellites already that, as you said, are like dishwasher, mini fridge kind of size. And we're working on a bunch more that are a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot to have effectively something like a multi-tenant model, where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something, where it's providing the power, it has the solar panels, it has some radios attached to it. It handles the attitude control, basically steers the spacecraft in orbit, and then we also build in-house what we call our payload hub, which has any customer payloads attached and our own kind of Edge processing sort of capabilities built into it. And so we integrate that, we launch it, and those things, because they're in low Earth orbit, they're orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So one of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time. So, we're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North Pole region.
>> We basically, previously we started off, when I joined the company, storing all of that as Angelo did in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple days to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So, that's things like power level, voltage, currents, counts, whatever metadata we need to monitor about the spacecraft, we now store that in InfluxDB. And now we can actually easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount, and we can also seamlessly query large chunks of data. Like if I need to see, you know, for example, as an operator, I might want to see how my battery state of charge is evolving over the course of the year. I can have a plot in Influx that loads that in a fraction of a second for a year's worth of data, because it can intelligently group the data by a time interval. So, it's been extremely powerful for us to access the data. And, as time has gone on, we've gradually migrated more and more of our operating data into Influx.
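The year-long battery plot Caleb describes works because the database groups points by a time interval server-side before returning them (what InfluxDB exposes as `GROUP BY time()` in InfluxQL or `aggregateWindow` in Flux). A minimal pure-Python sketch of that downsampling idea, run on synthetic data:

```python
from collections import defaultdict

def downsample(points, window_s, fn=lambda vals: sum(vals) / len(vals)):
    """Group (timestamp, value) points into fixed windows and aggregate each.

    This is a local sketch of what a time series database does server-side
    when you ask it to group a year of data by, say, one-day intervals.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % window_s].append(value)
    return sorted((start, fn(vals)) for start, vals in buckets.items())

# One reading per hour for two days; downsample to daily means.
points = [(hour * 3600, float(hour)) for hour in range(48)]
daily = downsample(points, window_s=86400)
print(daily)  # [(0, 11.5), (86400, 35.5)]
```

Two days of hourly readings collapse to two points, which is why a year of telemetry can render in a fraction of a second: the client only ever receives the aggregated series.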
And that's something that I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. My primary language is Python, which isn't necessarily that fast. So what we've done, in the spirit of being data-driven, is publish metrics on how individual pieces of our data processing pipeline are performing into Influx as well. And we do that in production as well as in dev. So we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. And sometimes, before we started doing this, we found ourselves kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. But now that we're being a bit more data driven there, we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >> Yeah, of course, that's how you reduce those dead ends. Maybe Angelo you could talk about what data-driven means to you and your teams. >> I would say that having real time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images that we collect with the telescope have good quality, and that they are within the specifications to meet our science goals.
And so if they are not, we want to know that as soon as possible and then start fixing problems. >> Caleb, what are your sort of event, you know, intervals like? >> So I would say that, as of today on the spacecraft, the level of timing that we deal with probably tops out at about 20 Hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, the ability to have high precision data, is extremely important for these kinds of scientific applications, and I'll give an example from when I worked on the rockets at Astra. There, our baseline data rate that we would ingest data at during a test was 500 Hertz. So 500 samples per second, and in some cases we would actually need to ingest much higher rate data, even up to like 1.5 kilohertz, so extremely, extremely high precision data there, where timing really matters a lot. And, you know, one of the really powerful things about Influx is the fact that it can handle this. That's one of the reasons we chose it, because there's times when we're looking at the results of a firing where you're zooming in. You know, I talked earlier about how in my current job we often zoom out to look at a year's worth of data. Here you're zooming in to where your screen is occupied by a tiny fraction of a second, and you need to see, same thing as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. So that can be something like, "Hey, I opened this valve at exactly this time," and we want to have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment, was that before or after this valve opened?
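That before-or-after question comes down to simple arithmetic on high-precision timestamps. A toy illustration, with invented nanosecond values standing in for a valve-open event and a pressure spike:

```python
# A toy version of the "did the pressure spike before or after the valve
# opened?" question, using nanosecond-precision timestamps.
# The names and values here are invented for illustration.
valve_open_ns = 1_650_000_000_123_456_789
pressure_spike_ns = 1_650_000_000_123_456_900

delta_ns = pressure_spike_ns - valve_open_ns
order = "after" if delta_ns > 0 else "before"
print(f"spike occurred {abs(delta_ns)} ns {order} the valve opened")
# spike occurred 111 ns after the valve opened
```

The whole value of nanosecond timestamps is that this subtraction stays meaningful: with only millisecond stamps, both events would collapse to the same instant and the causal ordering would be lost.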
That kind of visibility is critical in these kinds of scientific applications, and absolutely game changing to be able to see that in near real time, and with a really easy way for engineers to be able to visualize this data themselves without having to wait for us software engineers to go build it for them. >> Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? >> Well, I think that's absolutely, from my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing is that generally I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx, because the interface that we expose to them is Grafana, which is a generic, open source graphing library that is very similar to Influx's own Chronograf. >> Sure. >> And what it does is, it provides a very intuitive UI for building your queries. So, you choose a measurement, and it shows a dropdown of available measurements. And then you choose the particular fields you want to look at, and again, that's a dropdown. So, it's really easy for our users to discover, and there's kind of point and click options for doing math, aggregations. You can even do like predictions, all within Grafana, the Grafana user interface, which is really just a wrapper around the APIs and functionality that Influx provides. >> Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you, is it self-serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. >> Yeah, I mean, it's all about using the right tool for the job.
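Under the hood, those dropdown selections compose into a query against the database. A hypothetical sketch of how a UI like that might assemble a Flux-style query string from measurement, field, and aggregation choices; the bucket and measurement names here are made up:

```python
def build_flux_query(bucket: str, measurement: str, field: str,
                     range_start: str = "-1y", every: str = "1d",
                     fn: str = "mean") -> str:
    """Compose a Flux query from dropdown-style selections, roughly the way
    a point-and-click query editor does behind the scenes."""
    return (
        f'from(bucket: "{bucket}")\n'
        f'  |> range(start: {range_start})\n'
        f'  |> filter(fn: (r) => r._measurement == "{measurement}")\n'
        f'  |> filter(fn: (r) => r._field == "{field}")\n'
        f'  |> aggregateWindow(every: {every}, fn: {fn})'
    )

query = build_flux_query("telemetry", "battery", "state_of_charge")
print(query)
```

Each dropdown maps to one pipeline stage: the measurement and field choices become `filter` calls, and the math/aggregation picker becomes the `aggregateWindow` function, which is why a user never needs to write the query language directly.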
I think for us, when I joined the company we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly, and even querying short periods of data was taking on the order of seconds, which is just not possible for operations. >> Guys, this has been really informative. It's pretty exciting to see how the edge is mountaintops, low Earth orbits, I mean, space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here. You know, what comes next for you guys? And is there something that you're really excited about that you're working on? Caleb, maybe you could go first, and then Angelo you can bring us home. >> Basically what's next for Loft Orbital is more satellites, a greater push towards infrastructure, and really making, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, making that happen. It's an extremely exciting time to be in this company and to be in this industry as a whole. Because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be in. We're launching more satellites, we are scaling up for some constellations, and our ground system has to be improved to match. So, there's a lot of improvements that we're working on to really scale up our control software to be best in class and make it capable of handling such a large workload. >> Are you guys hiring? >> We are absolutely hiring. We have positions all over the company, so, we need software engineers, we need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. >> All right, Angelo, bring us home. >> Yeah.
So what's next for us is really getting this telescope working and collecting data. And when that happens, it's going to be just a deluge of data coming out of this camera, and handling all that data is going to be really challenging. Yeah, I want to be here for that, I'm looking forward to it. Like, for next year we have an important milestone, which is our commissioning camera, which is a simplified version of the full camera. It's going to be on sky, and so yeah, most of the system has to be working by then. >> Nice. All right guys, with that we're going to end it. Thank you so much, really fascinating, and thanks to InfluxDB for making this possible, really groundbreaking stuff, enabling value creation at the Edge, in the cloud, and of course, beyond, in space. So, really transformational work that you guys are doing, so congratulations, and really appreciate the broader community. I can't wait to see what comes next from having this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high tech enterprise coverage. >> Welcome. Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf, we'd love to hear what you're building. >> Thanks for watching Moving the World with InfluxDB, made possible by Influx Data. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment.
If you're dealing with large and/or fast data volumes, and you want to scale cost effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out. You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out. And poke around Influx Data. They are the folks behind InfluxDB, and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE, we'll see you soon. (upbeat music)

Published Date : May 18 2022


The Future Is Built On InfluxDB


 

>>Time series data is any data that's stamped in time in some way. That could be every second, every minute, every five minutes, every hour, every nanosecond, whatever it might be. And typically that data comes from sources in the physical world like devices or sensors, temperature gauges, batteries, any device really, or things in the virtual world. That could be software, maybe it's software in the cloud, or data in containers, or microservices, or virtual machines. So all of these items, whether in the physical or virtual world, they're generating a lot of time series data. Now time series data has been around for a long time, and there are many examples in our everyday lives. All you gotta do is punch up any stock ticker and look at its price over time in graphical form. And that's a simple use case that anyone can relate to, and you can build timestamps into a traditional relational database. >>You just add a column to capture time, and as well, there are examples of log data being dumped into a data store that can be searched and captured and ingested and visualized. Now, the problem with the latter example that I just gave you is that you gotta hunt and peck and search and extract what you're looking for. And the problem with the former is that traditional general purpose databases are designed as sort of a Swiss army knife for any workload. And there are a lot of functions that get in the way and make them inefficient for time series analysis, especially at scale. Like when you think about OT and edge scale, where things are happening super fast, ingestion is coming from many different sources, and analysis often needs to be done in real time or near real time. And that's where time series databases come in.
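The shape of the data being described is simple: each point is just a timestamp plus one or more values. A minimal sketch generating sensor-style time series readings (the names and numbers are invented):

```python
import random

def sample_sensor(n, period_s=1.0, start_ts=1_650_000_000.0):
    """Generate n timestamped temperature-style readings.

    Each point is the canonical shape of time series data:
    a timestamp plus one or more measured values.
    """
    return [(start_ts + i * period_s, 20.0 + random.random()) for i in range(n)]

for ts, temp in sample_sensor(5):
    print(ts, round(temp, 2))
```

Whether the source is a battery gauge or a stock ticker, the stream looks the same; what differs, as the narration goes on to explain, is the rate and volume at which such points arrive.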
>>They're purpose built and can much more efficiently support ingesting metrics at scale and then comparing data points over time. Time series databases can write and read at significantly higher speeds and deal with far more data than traditional database methods. And they're more cost effective. Instead of throwing processing power at the problem, for example, the underlying architecture and algorithms of time series databases can optimize queries, and they can reclaim wasted storage space and reuse it. At scale, time series databases are simply a better fit for the job. Welcome to Moving the World with InfluxDB, made possible by Influx Data. My name is Dave Vellante and I'll be your host today. Influx Data is the company behind InfluxDB, the open source time series database. InfluxDB is designed specifically to handle time series data, as I just explained. We have an exciting program for you today, and we're gonna showcase some really interesting use cases. >>First, we'll kick it off in our Palo Alto studios, where my colleague John Furrier will interview Evan Kaplan, who's the CEO of Influx Data. After John and Evan set the table, John's gonna sit down with Brian Gilmore. He's the director of IoT and emerging tech at Influx Data. And they're gonna dig into where Influx Data is gaining traction and why adoption is occurring, and why it's so robust. And they're gonna have tons of examples and double click into the technology. And then we bring it back here to our East Coast studios, where I get to talk to two practitioners doing amazing things in space with satellites and modern telescopes. These use cases will blow your mind. You don't want to miss it. So thanks for being here today. And with that, let's get started. Take it away, Palo Alto. >>Okay. Today we welcome Evan Kaplan, CEO of Influx Data, the company behind InfluxDB. Welcome Evan. Thanks for coming on. >>Hey John, thanks for having me. >>Great segment here on the InfluxDB story. What is the story?
Take us through the history. Why time series? What's the story? >><laugh> So the history is actually pretty interesting. Um, Paul Dix, my partner in this and our founder, um, super passionate about developers and developer experience. And, um, he had worked on Wall Street building a number of time series kind of platforms, trading platforms for trading stocks. And from his point of view, it was always what he would call a yak shave, which means you had to do a ton of work just to start doing work, which means you had to write a bunch of extrinsic routines. You had to write a bunch of application handling on existing relational databases in order to come up with something that was optimized for a trading platform or a time series platform. And he just developed this real clear point of view: this is not how developers should work. And so in 2013, he went through Y Combinator, and he made his first commit to open source InfluxDB at the end of 2013. And he basically, you know, from my point of view, he invented modern time series, which is: you start with a purpose-built time series platform to do these kinds of workloads, and you get all the benefits of having something right outta the box. So a developer can be totally productive right away. >>And how many people in the company? What's the history of employees and stuff? >>Yeah, you know, I always forget the number, but it's something like 230 or 240 people now. Um, I joined the company in 2016 and I love Paul's vision. And I just had a strong conviction about the relationship between time series and IoT. Cuz if you think about it, what sensors do is they speak time series: pressure, temperature, volume, humidity, light. They're measuring, they're instrumenting something over time. And so I thought that would be super relevant over the long term, and I've not regretted it.
And it's interesting at that time, go back in the history, you know, the role of databases, well, relational database is the one database to rule the world. And then as clouds started coming in, you starting to see more databases, proliferate types of databases and time series in particular is interesting. Cuz real time has become super valuable from an application standpoint, O T which speaks time series means something it's like time matters >>Time. >>Yeah. And sometimes data's not worth it after the time, sometimes it worth it. And then you get the data lake. So you have this whole new evolution. Is this the momentum? What's the momentum, I guess the question is what's the momentum behind >>You mean what's causing us to grow. So >>Yeah, the time series, why is time series >>And the >>Category momentum? What's the bottom line? >>Well, think about it. You think about it from a broad, broad sort of frame, which is where, what everybody's trying to do is build increasingly intelligent systems, whether it's a self-driving car or a robotic system that does what you want to do or a self-healing software system, everybody wants to build increasing intelligent systems. And so in order to build these increasing intelligent systems, you have to instrument the system well, and you have to instrument it over time, better and better. And so you need a tool, a fundamental tool to drive that instrumentation. And that's become clear to everybody that that instrumentation is all based on time. And so what happened, what happened, what happened what's gonna happen? And so you get to these applications like predictive maintenance or smarter systems. And increasingly you want to do that stuff, not just intelligently, but fast in real time. 
So millisecond response so that when you're driving a self-driving car and the system realizes that you're about to do something, essentially you wanna be able to act in something that looks like real time, all systems want to do that, want to be more intelligent and they want to be more real time. And so we just happen to, you know, we happen to show up at the right time in the evolution of a >>Market. It's interesting near real time. Isn't good enough when you need real time. >><laugh> yeah, it's not, it's not. And it's like, and it's like, everybody wants, even when you don't need it, ironically, you want it. It's like having the feature for, you know, you buy a new television, you want that one feature, even though you're not gonna use it, you decide that your buying criteria real time is a buying criteria >>For, so you, I mean, what you're saying then is near real time is getting closer to real time as possible, as fast as possible. Right. Okay. So talk about the aspect of data, cuz we're hearing a lot of conversations on the cube in particular around how people are implementing and actually getting better. So iterating on data, but you have to know when it happened to get, know how to fix it. So this is a big part of how we're seeing with people saying, Hey, you know, I wanna make my machine learning algorithms better after the fact I wanna learn from the data. Um, how does that, how do you see that evolving? Is that one of the use cases of sensors as people bring data in off the network, getting better with the data knowing when it happened? >>Well, for sure. So, so for sure, what you're saying is, is, is none of this is non-linear, it's all incremental. And so if you take something, you know, just as an easy example, if you take a self-driving car, what you're doing is you're instrumenting that car to understand where it can perform in the real world in real time. 
And if you do that, if you run the loop, which is I instrumented, I watch what happens, oh, that's wrong? Oh, I have to correct for that. I correct for that in the software. If you do that for a billion times, you get a self-driving car, but every system moves along that evolution. And so you get the dynamic of, you know, of constantly instrumenting watching the system behave and do it. And this and sets up driving car is one thing. But even in the human genome, if you look at some of our customers, you know, people like, you know, people doing solar arrays, people doing power walls, like all of these systems are getting smarter. >>Well, let's get into that. What are the top applications? What are you seeing for your, with in, with influx DB, the time series, what's the sweet spot for the application use case and some customers give some >>Examples. Yeah. So it's, it's pretty easy to understand on one side of the equation that's the physical side is sensors are sensors are getting cheap. Obviously we know that and they're getting the whole physical world is getting instrumented, your home, your car, the factory floor, your wrist, watch your healthcare, you name it. It's getting instrumented in the physical world. We're watching the physical world in real time. And so there are three or four sweet spots for us, but, but they're all on that side. They're all about IOT. So they're think about consumer IOT projects like Google's nest todo, um, particle sensors, um, even delivery engines like rapid who deliver the Instacart of south America, like anywhere there's a physical location do and that's on the consumer side. And then another exciting space is the industrial side factories are changing dramatically over time. Increasingly moving away from proprietary equipment to develop or driven systems that run operational because what, what has to get smarter when you're building, when you're building a factory is systems all have to get smarter. 
And then lastly, a lot in renewables and sustainability. So, you know, Tesla, Lucid Motors, Nikola Motors, lots to do with electric cars, solar arrays, windmill arrays, anything that's gonna get instrumented, where that instrumentation becomes part of the purpose. >>It's interesting, the convergence of physical and digital is happening with the data, with IoT. You think of IoT and look at the use cases there: it was proprietary OT systems, now becoming IP-enabled, internet protocol, and now edge compute getting smaller, faster, cheaper, AI going to the edge. Now you have all kinds of new capabilities that bring that real-time and time series opportunity. Are you seeing IoT going to a new level? Where are the IoT dots connecting to? Because as these two cultures merge, operations, basically industrial, factory, car, they gotta get smarter. Intelligent edge is a buzzword, but it has to be more intelligent. Where's the action in all this? >>The action really, at the core, is at the developer, right? Because it's very hard to get an off-the-shelf system to do these kinds of physical and software interactions. So the action really happens with the developer. And so what you're seeing is a movement in the world that maybe you and I grew up in, with IT or OT, moving increasingly to that developer-driven capability. All of these IoT systems are bespoke; they don't come out of the box. And so the developer, the architect, the CTO, they define: what's my business, what am I trying to do? Am I trying to sequence a human genome and figure out when these genes express themselves, or am I trying to figure out when the next heart rate monitor's gonna show up on my Apple Watch? What am I trying to do? What's the system I need to build?
And so it starts with the developers; that's where all the good stuff happens, which is different than it used to be. It used to be you'd buy an application or a service or a SaaS thing, but with this dynamic, with this integration of systems, it's all about bespoke, it's all about building >>Something. So let's get to the developer real quick. The real highlight point here is the data. I could see a developer saying, okay, I need to have an application for the edge, IoT edge or car. I mean, Tesla's got applications in the car right there. So there's the modern application lifecycle now. Take us through how this impacts the developer. Does it impact their CI/CD pipeline? Is it cloud native? Where does this all go? >>Well, first of all, there was an internal journey we had to go through as a company, which I think is fascinating for anybody who's interested: we went from primarily monolithic software that was open sourced to building a cloud native platform, which means we had to move from an agile development environment to a CI/CD environment. To the degree that your service is cloud, whether it's Tesla monitoring your car and updating your power walls, or a solar company updating the arrays, you increasingly move from agile development to a CI/CD environment where you're shipping code to production every day. And so it's not just the developers; it's all the infrastructure to support the developers running that service. I think that's also gonna happen in a big way. >>With the customer base you have now, and as you see it evolving with InfluxDB, is it that they're gonna be writing more of the application, or relying more on others? I mean, obviously there's an open source component here.
So when you bring in kind of old way, new way: the old way was, I've got a proprietary platform running all this OT stuff, and I gotta write an application that's general purpose. I have some flexibility, but it's somewhat brittle, maybe not a lot of robustness to it, but it does its job. >>A good way to think about this versus the new way >>is >>what? So yeah, a good way to think about this is: what's the role of the developer, architect, CTO chain within a large enterprise or a company? I started my career in the aerospace industry <laugh>, and when you look at what Boeing does to assemble a plane, they build very, very few of the parts. Instead, they assemble: they buy the engines; actually, they don't buy the wings, they buy the material for the wings and build them, because there's a lot of tech in the wings. They end up being smart assemblers of what ends up being a flying airplane, which is a pretty big deal even now. And the same happens with software people: they have the ability to pull from the best of the open source world. So they would pull a time series capability from us, then assemble that with some ETL logic from somebody else, or with a Kafka interface to be able to stream the data in. They become very good integrators and assemblers, and they become masters of that bespoke application. I think that's where it goes, 'cause you're not writing native code for everything. >>So they're more flexible, they have faster time to market 'cause they're assembling way faster, and they still get to maintain their core competency: their wings, in this case. >>They become increasingly not just coders, but designers and developers. Broadly, they become builders is how we like to think of it.
People who start and build stuff. By the way, this is no different from what the people just up the road at Google have been doing for years, or the tier ones like Amazon, building all their own. >>Well, one of the things that's interesting is this idea of developing a system architecture. Systems have consequences when you make changes. So when you have cloud, data center, on-premise, and edge working together, how does that work across the system? You can't have a wing that doesn't work with the other wing, kind of thing. >>Exactly, but that's where that Boeing, that airplane-building analogy comes in for us. We've really been thoughtful about that, because for IoT it's critical. So our open source edge has the same API as our cloud native stuff, which has enterprise, on-prem, and edge. Our multiple products have the same API, and they have a relationship with each other; they can talk with each other. So the builder builds it once. And this is where, when you start thinking about the components people use to build these services, you wanna make sure that at least that base layer, that database layer, that those components talk to each other. >>So I'll have to ask you, I'll put my customer hat on. Okay, hey, I'm dealing with a lot. >>Does that mean you have a PO for us? <laugh>
So chances are, if you've got a business that was, you know, 20 years old or 25 years old, you were already thinking about time series. You probably didn't call it that you built something on a Oracle or you built something on IBM's DB two, right. And you made it work within your system. Right? And so that's what you started building. So it's already out there. There are, you know, there are probably hundreds of millions of time series applications out there today. But as you start to think about this increasing need for real time, and you start to think about increasing intelligence, you think about optimizing those systems over time. I hate the word, but digital transformation. Then you start with time series. It's a foundational base layer for any system that you're gonna build. There's no system I can think of where time series, shouldn't be the foundational base layer. If you just wanna store your data and just leave it there and then maybe look it up every five years. That's fine. That's not time. Series time series is when you're building a smarter, more intelligent, more real time system. And the developers now know that. And so the more they play a role in building these systems, the more obvious it becomes. >>And since I have a PO for you and a big check, yeah. What is, what's the value to me as I, when I implement this, what's the end state, what's it look like when it's up and running? What's the value proposition for me. What's an >>So, so when it's up and running, you're able to handle the queries, the writing of the data, the down sampling of the data, they're transforming it in near real time. So that the other dependencies that a system that gets for adjusting a solar array or trading energy off of a power wall or some sort of human genome, those systems work better. So time series is foundational. 
It's not doing every action above it, but it's foundational to building a really compelling, intelligent system. I think that's what developers and architects are seeing now. >>Bottom line, final word: what's in it for the customer? What's your statement to someone looking to do something in time series on the edge? >>Yeah, so it's pretty clear to us that if you view yourself as being in the business of building systems, and you want them to be increasingly intelligent, self-healing, autonomous, operating in real time, you start from time series. But I also wanna say what's in it for us at Influx: people are doing some amazing stuff. I highlighted some of the energy work, some of the human genome, some of the healthcare. It's hard not to be proud, or feel like, wow, somehow I've been lucky, I've arrived at the right time, in the right place, with the right people to be able to deliver on that. That's also exciting on our side of the equation. >>Yeah, it's critical infrastructure, critical operations. >>Yeah. >>Great stuff, Evan. Thanks for coming on; appreciate this segment. All right, in a moment, Brian Gilmore, director of IoT and emerging technology at InfluxData, will join me. You're watching theCUBE, the leader in tech coverage. Thanks for watching. >>Time series data from sensors, systems, and applications is a key source in driving automation and prediction in technologies around the world. But managing the massive amount of timestamped data generated these days is overwhelming, especially at scale.
That's why InfluxData developed InfluxDB, a time series data platform that collects, stores, and analyzes data. InfluxDB empowers developers to extract valuable insights and turn them into action by building transformative IoT, analytics, and cloud native applications, purpose-built and optimized to handle the scale and velocity of timestamped data. InfluxDB puts the power in your hands with developer tools that make it easy to get started quickly with less code. InfluxDB is more than a database; it's a robust developer platform with integrated tooling that's written in the languages you love, so you can innovate faster. Run InfluxDB anywhere you want by choosing the provider and region that best fits your needs across AWS, Microsoft Azure, and Google Cloud. InfluxDB is fast and automatically scalable, so you can spend time delivering value to customers, not managing clusters. Take control of your time series data so you can focus on the features and functionality that give your applications a competitive edge. Get started for free with InfluxDB: visit influxdata.com/cloud to learn more. >>Okay, now we're joined by Brian Gilmore, director of IoT and emerging technologies at InfluxData. Welcome to the show. >>Thank you, John. Great to be here. >>We just spent some time with Evan going through the company and the value proposition with InfluxDB. What's the momentum, where do you see this coming from? What's the value coming out of this? >>Well, I think we're hitting a point where adoption of the technology is becoming mainstream. We're seeing it in all sorts of organizations, everybody from the most well-funded, advanced, big technology companies to the smaller academics and the startups. And the data that emits from that technology is time series, and being able to give them a platform, a tool, that's super easy to use, easy to start.
And then one that of course will grow with them, that's been key to us, sort of riding along with them as they're successful. >>Evan was mentioning that time series has been on everyone's radar, and it's been in the OT business for years. Go back to 2013, '14, even five years ago: that convergence of physical and digital coming together, the IP-enabled edge. Edge has always been kind of hyped up, but why now? Why is the edge so hot right now from an adoption standpoint? Is it just evolution, the tech getting better? >>I think it's twofold. Everybody was so focused on cloud over the last ten years that they forgot about the compute that was available at the edge. And those, especially in OT and on the factory floor, who weren't able to take full advantage of cloud through their applications still needed to be able to leverage that compute at the edge. The big thing we're seeing now, which is interesting, is that there's a hybrid nature to all of these applications: there's definitely some data generated at the edge, and definitely some data generated in the cloud, and it's the ability for a developer to tie those two systems together and work with that data in a unified, uniform way that's giving them the opportunity to build solutions that really deliver value to whatever they're trying to do, whether it's the outer reaches of outer space or optimizing the factory floor. >>You also mentioned genome; big data is coming to the real world. IoT has been kind of this thing for OT and some use cases, but now, with the cloud, all companies have an edge strategy.
So what's the secret sauce? Because now this is a hot product for the whole world, not just industrial, but all businesses. What's the secret sauce? >>Well, part of it is just that the technology is becoming more capable, and that's especially true on the hardware side: compute is getting smaller and smaller. We find that by supporting all the way down to the edge, even to the microcontroller layer with our client libraries, and by working hard to make our applications, especially the database, as small as possible, it can be located as close as possible to the point of origin of that data at the edge. Now you can run that locally, do your local decision making, use InfluxDB as an input to automation, control, the autonomy people are trying to drive at the edge. But when you link it up with everything that's in the cloud, that's when you get all of the cloud-scale capabilities of parallelized AI and machine learning and all of that. >>What's interesting is the open source success, something we've talked about a lot on theCUBE, how people are leveraging it. You have users in the enterprise, users in the IoT market, but you've got developers now too, kind of brought together. How do you see that emerging? How do developers engage? What are some of the things you're seeing that developers are really getting into with InfluxDB?
But I think we, you gotta pay attention to those enterprise developers as well, right? There are tons of people with the, the title of engineer in, in your regular enterprise organizations. And they're there for systems integration. They're there for, you know, looking at what they would build versus what they would buy. And a lot of them come from, you know, a strong, open source background and they, they know the communities, they know the top platforms in those spaces and, and, you know, they're excited to be able to adopt and use, you know, to optimize inside the business as compared to just building a brand new one. >>You know, it's interesting too, when Evan and I were talking about open source versus closed OT systems, mm-hmm <affirmative> so how do you support the backwards compatibility of older systems while maintaining open dozens of data formats out there? Bunch of standards, protocols, new things are emerging. Everyone wants to have a control plane. Everyone wants to leverage the value of data. How do you guys keep track of it all? What do you guys support? >>Yeah, well, I mean, I think either through direct connection, like we have a product called Telegraph, it's unbelievable. It's open source, it's an edge agent. You can run it as close to the edge as you'd like, it speaks dozens of different protocols in its own, right? A couple of which MQTT B, C U a are very, very, um, applicable to these T use cases. But then we also, because we are sort of not only open source, but open in terms of our ability to collect data, we have a lot of partners who have built really great integrations from their own middleware, into influx DB. These are companies like ke wear and high bite who are really experts in those downstream industrial protocols. I mean, that's a business, not everybody wants to be in. 
It requires some very specialized, very hard work and a lot of support, and so by making those connections and building those ecosystems, we get the best of both worlds: customers can use the platforms they need up to the point where they put data into our database. >>What are some of the customer testimonials they share with you? Can you share some anecdotes, like, wow, that's the best thing I've ever used, this really changed my business, or, this is great tech that's helped me in these other areas? What are some of the soundbites you hear from customers when they're successful? >>Yeah, it ranges. You've got customers who are just finally able to do the monitoring of assets out at the edge, in the field. We have a customer who has these tunnel boring machines that go deep into the earth to drill tunnels for cars and trains and things like that. They're just excited to be able to stick a database onto those tunnel boring machines, send them into the depths of the earth, and know that when they come out, all of that telemetry, at a very high frequency, has been safely stored, and can then very quickly and instantly connect up to their centralized database. Just having that visibility is brand new to them, and that's super important. On the other hand, we have customers who are way beyond the monitoring use case, where they're actually using the historical records in the time series database to, as I think Evan mentioned, forecast things.
So for predictive maintenance: being able to pull in the telemetry from the machines, but also all of that external enrichment data, the metadata, the temperatures, the pressures, who is operating the machine, those types of things, and being able to easily integrate with platforms like Jupyter notebooks, or all of those scientific computing and machine learning libraries, to build the models and train the models. Then they can send that information back down to InfluxDB to apply it and detect those anomalies. >>I think that's gonna be a hot area, because if you look at AI right now, it's all about training the machine learning algorithms after the fact. So time series becomes hugely important: the data matters the first time, and then it gets updated the next time. It's constant data cleansing, data iteration, data programming. We're starting to see this new use case emerge in the data field. >>Yep, I agree. It's the ability to handle those pipelines of data smartly, intelligently, and then to do everything you need to do with that data in-stream, before it hits your central repository. And we make that really easy: Telegraf not only has the inputs to connect up to all of those protocols and the ability to capture and connect up to the partner data, it also has a whole bunch of capabilities for processing that data: enrich it, reformat it, route it, do whatever you need. So at that point you're shaping your data in exactly the way you want, routing it to different destinations, and that's not something that has really been in the realm of possibility until this point.
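The train-centrally, apply-at-the-edge loop described above can be illustrated with a deliberately simple statistical check. This is a generic z-score sketch, not InfluxDB's API; the threshold and the readings below are invented for illustration:

```python
from statistics import mean, stdev

def zscore_anomalies(readings, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings) if abs(x - mu) / sigma > threshold]

# Twenty normal temperature readings followed by one spike
temps = [10.0] * 20 + [100.0]
print(zscore_anomalies(temps))  # [20]
```

In practice the model would be trained on historical records pulled from the database, as described, with only the scoring step pushed down to the edge.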
>>When Evan was on, it was great; he's the CEO, so he sees the big picture with customers. He kind of put the package together that said, hey, we've got a system, we've got customers, people want to leverage our product; he's selling it as well. So you have that whole CEO perspective, but he brought up this notion that there are multiple personas involved in the InfluxDB world: the system architect, developers, users. Can you talk about that reality as customers start to commercialize and operationalize this? You've got a relationship to the cloud, and the edge is getting super important, but the cloud brings a lot of scale to the table. So what is the relationship of the edge to the cloud? >>Yeah, you can think of edge really as the local information. It's generally compartmentalized to a single asset or a single factory line, whatever. What people want is to be able to make decisions there at the edge, locally and quickly, minus the latency of taking that large volume of data, shipping it to the cloud, and doing something with it there. So we allow them to do exactly that. Then they can actually downsample that data, or detect the really important metrics or the anomalies, and ship that to a central database in the cloud, where they can do all sorts of really interesting things with it: get that centralized view of all of your global assets, start to compare asset to asset, and then do those things we talked about, like predictive types of analytics or larger-scale anomaly detection. >>So in this model you have a lot of commercial operations, industrial equipment. Yep.
The physical plant, the physical business, with virtual data and cloud all coming together. What's the future for InfluxDB from a tech standpoint? Because you've got open source and an ecosystem there, and you have customers who want operational reliability. I mean, you've got organic growth. <laugh> >>Yeah, well, we got iPhones when everybody was waiting for flying cars, right? So I don't know that we can perfectly predict what's coming, but there are some givens, and those givens are that the world is only gonna become more hybrid. We're going to have much more widely distributed situations where you have data being generated in the cloud, data being generated at the edge, and data generated at all points in between, physical locations as well as things that are very virtual. And we're building some technology right now that's going to allow the concept of a database to be much more fluid and flexible, more aligned with what a file would be like. >>And so being able to move data to the compute for analysis, or move the compute to the data for analysis, those are the types of solutions we'll be bringing to customers over the next little bit. But I also think we have to start thinking about what happens when the edge is actually off the planet. We've got customers, you're gonna talk to two of them in the panel, who are working with data that comes from outside the earth, either in low earth orbit or all the way on the other side of the universe. And to be able to process data like that, and to do it well, we've gotta build the fundamentals right now, on the factory floor and in the mines and in the tunnels.
So that we'll be ready for that one. >>I think you bring up a good point, because one of the things that's common in the industry right now, and this is kind of new thinking, is that hyperscalers have always been built by full-stack developers; even in the old OT world, as Evan was pointing out, they built everything themselves. And the world's going to more assembly, with core competency and IP being the core of the offering. So faster assembly and building, but also integration. You've got all this new stuff happening, and that's to separate out the data complexity from the app. Space, genome, self-driving cars: they all throw off massive data. >>It does. >>So is Tesla, is the car the same as the data layer? >>It's certainly a point of origin. The thing we wanna do is let the developers work on the world-changing problems, the things they're trying to solve, whether it's energy or health or any of the other challenges these teams are building against, and we'll worry about the time series data and the underlying data platform so that they don't have to. You talked about it: for them to be able to adopt the platform quickly, integrate it with their data sources and the other pieces of their applications, it's going to allow them to bring much faster time to market on these products. It's gonna allow them to be more iterative; they're gonna be able to do more testing and things like that. And ultimately it'll accelerate the adoption and the creation of >>Technology. You mentioned earlier in our talk the unification of data. How about APIs? Because developers love APIs, and in the cloud, unifying APIs. How do you view that? >>Yeah, we are APIs; that's the product itself.
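To make "we are APIs" concrete: InfluxDB 2.x exposes writes through a REST endpoint, `POST /api/v2/write`, authenticated with a token header. A minimal sketch of assembling such a request; the host, org, bucket, and token below are placeholders, and the request is only built here, not sent:

```python
from urllib.parse import urlencode

def build_write_request(base_url, org, bucket, token, precision="ns"):
    """Assemble URL and headers for an InfluxDB 2.x write call (nothing is sent)."""
    query = urlencode({"org": org, "bucket": bucket, "precision": precision})
    return {
        "url": f"{base_url}/api/v2/write?{query}",
        "headers": {
            "Authorization": f"Token {token}",       # token-based auth
            "Content-Type": "text/plain; charset=utf-8",  # body is line protocol
        },
    }

req = build_write_request("https://cloud.example.com", "my-org", "sensors", "MY_TOKEN")
print(req["url"])  # https://cloud.example.com/api/v2/write?org=my-org&bucket=sensors&precision=ns
```

The client libraries Brian mentions wrap exactly this kind of call, along with query and task endpoints.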
Like everything, people like to think of it as having this nice front end, but the front end is built on our public APIs. It allows the developer to build all of those hooks for not only data creation, but then data processing, data analytics, and data extraction to bring it to other platforms or other applications, microservices, whatever it might be. So it is a world of APIs right now, and we bring a very useful set of them for managing the time series data these guys are all challenged with. >>It's interesting, you and I were talking before we came on camera about how data's gonna have this kind of SRE role, like DevOps had site reliability engineers managing a bunch of servers. There's so much data out there now. >>Yeah, it's like reining in data, for sure. And I think one of the best jobs on the planet is gonna be that data wrangler: being able to understand what the data sources are, what the data formats are, how to efficiently move that data from point A to point B, and how to process it correctly, so that the end users of that data aren't doing any of that hard upfront preparation, collection, and storage work. >>Yeah, that's data as code. I mean, data engineering is becoming a new discipline, for sure, and democratization is the benefit to everyone. Data science gets easier. I mean, they wanna make it easy, right? <laugh> They wanna do the analysis. >>Right, it's a really good point. We try to give our users as many ways as possible to get data in and get data out. We think about it as meeting them where they are. Right.
So we have the client libraries that allow them to write to us directly from the applications and the languages they're writing in, but then they can also pull it out. And at that point, nobody's gonna know the users, the end consumers of that data, better than the people building those applications. So they're building these user interfaces, which make all of that data accessible for their end users inside their organization. >>Well, Brian, great segment, great insight. Thanks for sharing all the complexities in IoT that you guys help take away with the APIs and the assembly, and all the system architectures that are changing. Edge is real, cloud is real, mainstream enterprises, and you've got developer traction too, so congratulations. >>Yeah, it's >>Great. Well, thanks. Any last word you wanna share? >>No, just, please: if you're gonna check out InfluxDB, download it, try out the open source, contribute if you can. That's a huge thing; it's part of being in the open source community. But definitely just use it. Once people use it and try it out, they'll understand very, >>Very quickly. So open source with developers, enterprise, and edge all coming together. You're gonna hear more about that in the next segment too. Thanks for coming on. >>Thanks. >>When we return, Dave Vellante will lead a panel on edge and data with InfluxDB. You're watching theCUBE, the leader in high tech enterprise coverage. >>As a startup, we move really fast, and we find that InfluxDB can move as fast as us. It's just a great group, very collaborative, very interested in manufacturing, and we see a bright future in working with Influx. My name is Aaron Seley, and I'm the CTO at HighByte.
HighByte is one of the first companies to focus on manufacturing data and apply the concepts of DataOps: treating that data as an asset to deliver to the IT system, to enable applications like overall equipment effectiveness that can help the factory produce better, smarter, faster. Time series data in manufacturing is really important. If you take a piece of equipment, you have the temperature and pressure at the moment that you can look at to see the state of what's going on. Without that context and understanding, you can't do what manufacturers ultimately want to do, which is predict the future. >> InfluxDB represents a new way to store time series data, with some more advanced technology and, more importantly, more open technologies. The other thing that Influx does really well is, once the data's in Influx, it's very easy to get out, right? They have a modern REST API and other ways to access the data, which would be much more difficult to do as integrations with classic historians. HighByte can serve to model and aggregate data on the shop floor from a multitude of sources, whether that be OPC UA servers, manufacturing execution systems, ERP, et cetera, and then push that seamlessly into Influx to then be able to run calculations. Manufacturing is changing, this is Industry 4.0, and what we're seeing is Influx being part of that equation, being used to store data off the unified namespace. We recommend InfluxDB all the time to customers exploring a new way to share manufacturing data called the unified namespace, who have open questions around: how do I share this new data that's coming through my UNS or my MQTT broker? How do I store this and be able to query it over time? And we often point to Influx as a solution for that. It's a great brand, it's a great group of people, and it's a great technology. >> Okay, we're now going to go into the customer panel, and we'd like to welcome Angelo Fausti,
who's a software engineer at the Vera C. Rubin Observatory, and Caleb McLaughlin, who's a senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. Folks, you don't wanna miss this interview. Caleb, let's start with you. You work for an extremely cool company; you're launching satellites into space. Of course, doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >> Yeah, absolutely, and thanks for having me here, by the way. So Loft Orbital is a company, a Series B startup now, and our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. What we're trying to do is change that from a super specialized problem with an extremely high barrier to access into an infrastructure problem, so that it's almost as simple as deploying a VM in AWS or GCP to get your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, stuff like that.
So they're, it's really little, and they've been able to stay a small startup that's focused on their product, which is that super crazy complicated, cool radio, while we handle the whole space segment for them, which before Loft was really impossible. So that's our mission: providing space infrastructure as a service. We're kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've gotta handle. >> So amazing, Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you land on this business? >> Yeah, so just a little bit about me. Some people don't necessarily know what they wanna do early in their life; for me, I was five years old and I knew I wanted to be in the space industry. So I started in the Air Force, but I've stayed in the space industry my whole career, and this is actually the fifth space startup that I've been a part of. I started out in satellites, spent some time working in the launch industry on rockets, and now I'm back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. >> Super interesting. Okay, Angelo, let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys at the observatory are way up high; you're gonna get a good look at the southern sky. Now, I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close; you've gotta be super excited. Give us the update on the observatory and your role. >> All right.
So yeah, Rubin is a state-of-the-art observatory under construction on a remote mountain in Chile. With Rubin, we'll conduct the Legacy Survey of Space and Time: we're going to observe the sky with an eight-meter optical telescope and take a thousand pictures every night with a 3.2-gigapixel camera, and we're going to do that for 10 years, which is the duration of the survey. >> Yeah, amazing project. Now, you're a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went on to earn a PhD in astronomy, in astrophysics. So this is something you've been working on for the better part of your career, isn't it? >> Yeah, that's right, about 15 years. I studied physics in college, then I got a PhD in astronomy, and I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like you both, your organizations, are looking at space from two different angles. One thing you guys both have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you could start. >> Yeah, absolutely. So the first company where I extensively used InfluxDB was a launch startup called Astra. We were in the process of designing our first-generation rocket there and testing the engines, pumps, everything that goes into a rocket. When I joined the company, our data story was not very mature: we were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. And at first, you know, that's the way a lot of engineers and scientists are used to working.
And at first, people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy, so our software engineering team was able to get it deployed and up and running very quickly, and then quickly backport all of the data we had collected thus far into Influx. And what happened next was amazing to see. >> The super cool moment with Influx was when we hooked it up to Grafana, the visualization platform we used with Influx, 'cause it works really well with it. There was this aha moment for our engineers, who were used to this post-process method of dealing with their data, where they could almost instantly, easily discover data that they hadn't been able to see before, take the manual processes they would run after a test, throw those all into Influx, and have live data as tests were running. I saw them implementing crazy rocket-equation-type stuff in Influx, and it was totally game changing for how we tested. >> So, Angelo, as I was explaining in my open, you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about, and the example Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >> Yeah, correct. So I work with the data management team, and my first project was to record metrics that measured the performance of our software, the software that we use to process the data. I started implementing that in a relational database, but then I realized that I was in fact dealing with time series data, and I should really use a solution built for that. So I started looking at time series databases, and I found InfluxDB. That was back in 2018.
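The performance-metrics use case Angelo describes, recording how long each piece of software takes as a time series, can be sketched as a small Python timer that emits one point per run. The measurement and tag names here are hypothetical, not Rubin's actual schema, and a real deployment would write the points to the database rather than hold them in a list:

```python
import time
from contextlib import contextmanager

# Collected points; in a real deployment these would be written to a
# time series database instead of accumulated in memory.
points = []

@contextmanager
def timed(task, **tags):
    """Record how long a block takes as one time-series point."""
    start = time.perf_counter()
    try:
        yield
    finally:
        points.append({
            "measurement": "task_duration",   # hypothetical measurement name
            "tags": {"task": task, **tags},
            "fields": {"seconds": time.perf_counter() - start},
            "time_ns": time.time_ns(),
        })

with timed("process_image", stage="calibration"):  # hypothetical pipeline step
    sum(i * i for i in range(100_000))             # stand-in for real work

print(points[0]["tags"])    # {'task': 'process_image', 'stage': 'calibration'}
```

Plotting `task_duration` by task and stage over time is then an ordinary time series query, which is what makes "where is the pipeline slow?" a dashboard question rather than a profiling session.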
Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. That itself is a time series, and every point in that time series we call a visit. We want to record the metadata about those visits in Influx. That time series is going to be 10 years long, with about 1,000 points every night. It's actually not too much data compared to other problems; it's really just a different time scale. >> The telescope at the Rubin Observatory is, pun intended, I guess, the star of the show. And I believe I read that it's gonna be the first of the next-gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times Hubble's widest camera view, which is amazing, right? That's like 40 moons in an image, amazingly fast as well. What else can you tell us about the telescope? >> This telescope has to move really fast, and it also has to carry the primary mirror, which is an eight-meter piece of glass, very heavy, and it has to carry a camera about the size of a small car. This whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff, and one thing that's amazing about its design is that this 300-ton structure sits on a tiny film of oil with the thickness of a human hair, which makes an almost zero-friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide-field telescope, so each image has, in diameter, the size of about seven full moons, and with that we can map the entire sky in only three days.
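As a back-of-envelope aside, the survey's scale follows directly from figures quoted in this program: about a thousand pictures (visits) a night, roughly 15 terabytes of images a night, for 10 years:

```python
# Rough survey-scale arithmetic from figures quoted in this program.
visits_per_night = 1_000     # ~1,000 pictures (visits) a night
tb_per_night = 15            # ~15 TB of image data a night
nights = 10 * 365            # 10-year survey, ignoring downtime

total_visits = visits_per_night * nights
total_pb = tb_per_night * nights / 1_000   # 1 PB = 1,000 TB

print(total_visits)   # 3650000 -- a few million points, modest for a TSDB
print(total_pb)       # 54.75 -- petabytes of images, the hard part
```

Which is exactly Angelo's point: the visits series itself is small by time series database standards; the images are where the real volume lives.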
And of course, in operations everything's controlled by software and is automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope, and the camera, which is recording 15 terabytes of data every night. >> Hmm. And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Actually, not all of it. We are using InfluxDB to record engineering data and metadata about the observations: telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images, but it is still challenging, because you have some high-frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Got it, thank you. Okay, Caleb, let's bring you back in. Tell us more about these dishwasher-size satellites. You're kind of using a multi-tenant model, I think it's genius, but tell us about the satellites themselves. >> Yeah, absolutely. So we have some satellites in space already that, as you said, are like dishwasher, mini-fridge kind of size, and we're working on a bunch more in a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot for effectively something like a multi-tenant model, where we will buy a bus off the shelf. The bus is what you can think of as the core piece of the satellite, almost like a motherboard: it's providing the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. And then we also build in house what we call our payload hub, which has any customer payloads attached and our own kind of edge processing capabilities built into it.
>> So we integrate that, we launch it, and because they're in low Earth orbit, they're orbiting the Earth every 90 minutes. That's about seven kilometers per second, which is several times faster than a speeding bullet. One of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time, so we're managing these things through very brief windows of time where we get to talk to them through our ground sites, either in Antarctica or in the north pole region. >> Talk more about how you use InfluxDB to make sense of this data, through all this tech that you're launching into space. >> When I joined the company, we started off storing all of that, as Angelo did, in a regular relational database, and we found that it was so slow, and the size of our data would balloon over the course of a couple days, to the point where we weren't able to even store all of the data we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft: things like power levels, voltages, currents, counts, whatever metadata we need to monitor about the spacecraft. We now store that in InfluxDB, and now we can easily store the entire volume of data for the mission life so far, without having to worry about the size bloating to an unmanageable amount. >> We can also seamlessly query large chunks of data. If, as an operator, I need to see how my battery state of charge is evolving over the course of the year, I can have a plot in Influx that loads a year's worth of data in a fraction of a second, because I can intelligently group the data by a sliding time interval.
So it's been extremely powerful for us to access the data, and as time has gone on, we've gradually migrated more and more of our operating data into Influx. >> Let's talk a little bit about this term we throw around a lot: data driven. A lot of companies say, oh yes, we're data driven, but you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? >> Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of slow research, digging into the data, to almost instantaneously seeing the data and making decisions based on it immediately, rather than having to wait for some processing. That's something I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft, and building that pipeline is challenging from a software engineering standpoint.
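The "group the data by a sliding time interval" trick Caleb mentioned is what Flux exposes as `aggregateWindow(every: ..., fn: mean)`. A pure-Python sketch of the same downsampling idea, assuming plain (seconds, value) telemetry samples:

```python
from collections import defaultdict

def aggregate_window(samples, every_s):
    """Downsample (t_seconds, value) samples into fixed windows, returning
    one (window_start, mean) pair per window -- the same idea as Flux's
    aggregateWindow(every: ..., fn: mean)."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[(t // every_s) * every_s].append(v)
    return sorted((start, sum(vs) / len(vs)) for start, vs in buckets.items())

# A year of hourly battery state-of-charge readings collapses to 365 points:
hourly = [(h * 3600, 50.0) for h in range(24 * 365)]
daily = aggregate_window(hourly, every_s=86_400)
print(len(daily))     # 365
print(daily[0])       # (0, 50.0)
```

This is why a year-long battery plot can load in a fraction of a second: the server returns a few hundred window means instead of millions of raw samples.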
And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to, uh, focus our development efforts in terms of improving our software efficiency. Uh, just because we have that visibility into where the real problems are. Um, it's sometimes we've found ourselves before we started doing this kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. Uh, but now, now that we're being a bit more data driven, there we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale to, from supporting a couple satellites, to supporting many, many satellites at >>Once. Yeah. Coach. So you reduced those dead ends, maybe Angela, you could talk about what, what sort of data driven means to, to you and your teams? >>I would say that, um, having, uh, real time visibility, uh, to the telemetry data and, and metrics is, is, is crucial for us. We, we need, we need to make sure that the image that we collect with the telescope, uh, have good quality and, um, that they are within the specifications, uh, to meet our science goals. And so if they are not, uh, we want to know that as soon as possible and then, uh, start fixing problems. >>Caleb, what are your sort of event, you know, intervals like? >>So I would say that, you know, as of today on the spacecraft, the event, the, the level of timing that we deal with probably tops out at about, uh, 20 Hertz, 20 measurements per second on, uh, things like our, uh, gyroscopes, but the, you know, I think the, the core point here of the ability to have high precision data is extremely important for these kinds of scientific applications. And I'll give an example, uh, from when I worked at, on the rocket at Astra there, our baseline data rate that we would ingest data during a test is, uh, 500 Hertz. So 500 samples per second. 
And in some cases we would actually need to ingest much higher-rate data, even up to 1.5 kilohertz. So extremely high-precision data, where timing really matters a lot. And one of the really powerful things about Influx is the fact that it can handle this. >> That's one of the reasons we chose it, because there are times when you're looking at the results of a firing where you're zooming in. I talked earlier about how in my current job we often zoom out to look at a year's worth of data; here you're zooming in to where your screen is occupied by a tiny fraction of a second, and you need to see, as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events coming out of our controllers. That can be something like: hey, I opened this valve at exactly this time, and we wanna have that at micro- or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment; was that before or after this valve opened? That kind of visibility is critical in these kinds of scientific applications, and absolutely game changing to be able to see it in near real time, with a really easy way for engineers to visualize the data themselves, without having to wait for software engineers to go build it for them. >> Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? >> Well, from my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing: generally, I'd say anyone can learn to use Influx.
And honestly, most of our users might not even know they're using Influx, because the interface we expose to them is Grafana, an open source graphing tool that is very similar to Influx's own Chronograf. It provides a very intuitive UI for building your queries: you choose a measurement, and it shows a dropdown of available measurements; then you choose the particular field you wanna look at, and again, that's a dropdown, so it's really easy for our users to discover. And there are point-and-click options for doing math and aggregations; you can even do predictions, all within the Grafana user interface, which is really just a wrapper around the APIs and functionality that Influx provides. >> Putting
Uh, and is there something that you're really excited about that, that you're working on Caleb, maybe you could go first and an Angela, you can bring us home. >>Uh, basically what's next for loft. Orbital is more, more satellites, a greater push towards infrastructure and really making, you know, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, uh, making that happen, it's extremely exciting and extremely exciting time to be in this company and to be in this industry as a whole, because there are so many interesting applications out there. So many cool ways of leveraging space that, uh, people are taking advantage of. And with, uh, companies like SpaceX and the now rapidly lowering cost, cost of launch, it's just a really exciting place to be. And we're launching more satellites. We are scaling up for some constellations and our ground system has to be improved to match. So there's a lot of, uh, improvements that we're working on to really scale up our control software, to be best in class and, uh, make it capable of handling such a large workload. So >>You guys hiring >><laugh>, we are absolutely hiring. So, uh, I would in we're we need, we have PE positions all over the company. So, uh, we need software engineers. We need people who do more aerospace, specific stuff. So, uh, absolutely. I'd encourage anyone to check out the loft orbital website, if there's, if this is at all interesting. >>All right. Angela, bring us home. >>Yeah. So what's next for us is really, uh, getting this, um, telescope working and collecting data. And when that's happen is going to be just, um, the Lu of data coming out of this camera and handling all, uh, that data is going to be really challenging. Uh, yeah. I wanna wanna be here for that. 
<laugh> I'm looking forward to it. Next year we have an important milestone: our commissioning camera, a simplified version of the full camera, is going on sky, and so most of the system has to be working by then. >> Nice. All right, guys, with that, we're gonna end it. Thank you so much, really fascinating. And thanks to InfluxDB for making this possible; really groundbreaking stuff, enabling value creation at the edge, in the cloud, and of course beyond, in space. So really transformational work that you guys are doing. Congratulations, and we appreciate the broader community; I can't wait to see what comes next from this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high tech enterprise coverage. >> Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf, we'd love to hear what you're building. >> Thanks for watching Moving The World With InfluxDB, made possible by InfluxData. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and/or fast data volumes, and you wanna scale cost-effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out.
You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out, and poke around InfluxData; they're the folks behind InfluxDB and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE. We'll see you soon.

Published Date : May 12 2022



Moving The World With InfluxDB


 

(upbeat music) >> Okay, we're now going to go into the customer panel. And we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb Maclachlan, who's a senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. Folks, you don't want to miss this interview. Caleb, let's start with you. You work for an extremely cool company. You're launching satellites into space. Doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >> Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a company that's a series B startup now. And our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. And what we're trying to do is change that from a super specialized problem with an extremely high barrier of access into an infrastructure problem. So that getting your programs, your mission deployed on orbit, with access to different sensors, cameras, radios, stuff like that, is almost as simple as deploying a VM in AWS or GCP. So that's kind of our mission. And just to give a really brief example of the kind of customer that we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation, for the Internet of Things. Basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container, a container that you can track from anywhere in the world as it's going across the ocean. So it's really little.
And they've been able to stay a small startup that's focused on their product, which is that super crazy, complicated, cool radio, while we handle the whole space segment for them, which, before Loft, was really impossible. So that's our mission: providing space infrastructure as a service. We are kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. >> Yeah, so amazing, Caleb, what you guys do. I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so I guess just a little bit about me. Some people don't necessarily know what they want to do early in their life. For me, I was five years old and I knew I wanted to be in the space industry. So I started in the Air Force, but have stayed in the space industry my whole career. This is actually the fifth space startup that I've been a part of. So I kind of started out in satellites, spent some time working in the launch industry on rockets, and now I'm here, back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. I've always been passionate about space and about writing software for operating in space, for basically extending how we write software into orbit. >> Super interesting. Okay, Angelo. Let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now, you guys at the observatory are way up high, and you're going to get a good look at the southern sky. I know COVID slowed you guys down a bit, but no doubt you continue to code away on the software. I know you're getting close. You've got to be super excited. Give us the update on the observatory and your role. >> All right. So yeah, Rubin is a state-of-the-art observatory that is under construction on a remote mountain in Chile.
And with Rubin we'll conduct the Legacy Survey of Space and Time. We are going to observe the sky with an eight meter optical telescope and take 1000 pictures every night with a 3.2 gigapixel camera. And we're going to do that for 10 years, which is the duration of the survey. The goal is to produce an unprecedented data set, which is going to be about 0.5 exabytes of image data. And from these images we'll detect and measure the properties of billions of astronomical objects. We are also building a science platform that's hosted on Google Cloud, so that scientists and the public can explore this data to make discoveries. >> Yeah, amazing project. Now, you are a Doctor of Philosophy, so you probably spent some time thinking about what's out there, and then you went on to earn a PhD in astronomy and astrophysics. So this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right. About 15 years. I studied physics in college, then I got a PhD in astronomy. And I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software. And you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you can start. >> Yeah, absolutely. So the first company where I extensively used InfluxDB was a launch startup called Astra. We were in the process of designing our first generation rocket there and testing the engines, pumps, everything that goes into a rocket. And when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. And at first, that's the way that a lot of engineers and scientists are used to working.
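That log-then-post-process loop is typically replaced by streaming the same samples into a time series database. In InfluxDB's case, every sample becomes a point in line protocol: a measurement name, optional tags, fields, and a timestamp at up to nanosecond precision. A minimal pure-Python sketch of that formatting; the measurement, tag, and field names here are invented for illustration, not Astra's actual schema:

```python
from datetime import datetime, timezone

def to_line_protocol(measurement, tags, fields, ts):
    """Format one sample as InfluxDB line protocol:
    measurement,tag=v field=v <timestamp in nanoseconds>."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ns = int(ts.timestamp()) * 1_000_000_000  # whole-second samples here
    return f"{measurement},{tag_str} {field_str} {ns}"

# Hypothetical logged test-stand samples being replayed into the database.
samples = [
    (datetime(2022, 1, 1, 12, 0, 0, tzinfo=timezone.utc), 812.5),
    (datetime(2022, 1, 1, 12, 0, 1, tzinfo=timezone.utc), 815.0),
]
lines = [
    to_line_protocol("engine_test", {"stand": "A"}, {"chamber_pressure": p}, t)
    for t, p in samples
]
print(lines[0])
# engine_test,stand=A chamber_pressure=812.5 1641038400000000000
```

Backfilling historical logs and ingesting live test data both reduce to emitting batches of lines like these, which is part of why the migration Caleb describes can happen quickly.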
And at first, people weren't entirely sure that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly, and then also quickly backport all of the data that we had collected thus far into Influx. And what was amazing to see, and it's kind of the super cool moment with Influx, is when we hooked that up to Grafana. Grafana is the visualization platform we use with Influx, because it works really well with it. There was this aha moment for our engineers, who were used to this post-process kind of method for dealing with their data, where they could almost instantly, easily discover data that they hadn't been able to see before. They could take the manual processes that they would run after a test, throw those all in Influx, and have live data as tests were running. And I saw them implementing crazy rocket equation type stuff in Influx, and it was totally game changing for how we tested. Things that previously would be: run a test, then wait an hour for the engineers to crunch the data, then run another test with some changed parameters or a changed startup sequence, became: by the time the test is over, the engineers know what the next step is, because they have this instant, game changing access to data. So since that experience, basically everywhere I've gone, every company since then, I've been promoting InfluxDB, using it, spinning it up, and quickly showing people how simple and easy it is. >> Yeah, thank you. So Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series. But with the volume of data that you're talking about, in the example that Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >> Yeah, correct.
So I worked with the data management team, and my first project was to record metrics that measure the performance of our software, the software that we use to process the data. So I started implementing that in our relational database. But then I realized that, in fact, I was dealing with time series data, and I should really use a solution built for that. And then I started looking at time series databases and I found InfluxDB. That was back in 2018. Then I got involved in another project, to record telemetry data from the telescope itself. It's very challenging because you have so many subsystems and sensors producing data. And with that data, the goal is to look at the telescope hardware in real time, so we can make decisions and make sure that everything's doing the right thing. And another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series, and every point in that time series we call a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1000 points every night. It's actually not too much data compared to the other problems; it's really just a different time scale. So yeah, we have plans on continuing to use InfluxDB and finding new applications in the project. >> Yeah, and the speed with which you can actually get high quality images. Angelo, my understanding is you use InfluxDB, as you said, for monitoring the telescope hardware and the software, and, as you say, some of the scientific data as well. The telescope at the Rubin Observatory is, no pun intended, I guess, the star of the show. And I believe I read that it's going to be the first of the next gen telescopes to come online.
It's got this massive field of view, like three orders of magnitude beyond the Hubble's widest camera view, which is amazing. That's like 40 moons in an image, and it's amazingly fast as well. What else can you tell us about the telescope? >> Yeah, so it's really a challenging project from the point of view of engineering. This telescope has to move really fast. And it also has to carry the primary mirror, which is an eight meter piece of glass, very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300 ton structure, sits on a tiny film of oil, which has the thickness of a human hair, and that brings an almost zero friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So each image has, in diameter, the size of about seven full moons. And with that we can map the entire sky in only three days. And of course, during operations, everything's controlled by software, and it's automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope and the camera, which will record the 15 terabytes of data every night. >> And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Yeah, actually not. We're using InfluxDB to record engineering data and metadata about the observations, like telemetry, events, and the commands from the telescope. That's a much smaller data set compared to the images. But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Hm.
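One practical consequence of high frequency data like this: InfluxDB identifies a point by its measurement, tag set, and timestamp, so if the configured write precision is coarser than the spacing between two readings, the later one silently overwrites the earlier one. A rough illustration of that collapse, using made-up timestamps 500 microseconds apart:

```python
# Two events 500 microseconds apart, expressed as nanosecond timestamps.
t0_ns = 1_700_000_000_000_000_000
t1_ns = t0_ns + 500_000  # 500 us later

def truncate(ts_ns, precision_ns):
    """Truncate a nanosecond timestamp to a coarser write precision."""
    return ts_ns - (ts_ns % precision_ns)

MS = 1_000_000  # one millisecond, in nanoseconds
US = 1_000      # one microsecond, in nanoseconds

# At millisecond precision both events land on the same timestamp, so
# the second point would overwrite the first (same measurement + tags).
assert truncate(t0_ns, MS) == truncate(t1_ns, MS)

# At microsecond precision they remain distinct points.
assert truncate(t0_ns, US) != truncate(t1_ns, US)
print("ms precision collapses the events; us precision keeps them apart")
```

This is why, as the panel discusses later, telemetry can live comfortably at millisecond precision while closely spaced events need microsecond or nanosecond timestamps.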
So at the mountain, we keep the data for 30 days. The observers use an InfluxDB instance running there to analyze the data. But we also replicate the data to another instance running at the US data facility, where we have more computational resources and more people can look at the data without interfering with the observations. Yeah, I have to say that InfluxDB has been really instrumental for us, especially at this phase of the project where we are testing and integrating the different pieces of hardware. And it's not just the database, right? It's the whole platform. So I like to give this example: when we are doing this kind of task, it's hard to know in advance which dashboards and visualizations you're going to need. So what you really need is a data exploration tool. And with tools like Chronograf, for example, having the ability to query and create dashboards on the fly was really a game changer for us. Astronomers typically are not software engineers, but they are the ones who know better than anyone what needs to be monitored. And so they use Chronograf, and they can create the dashboards and the visualizations that they need. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about these dishwasher-size satellites; you're kind of using a multi-tenant model. I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So we have some satellites in space already that, as you said, are about dishwasher or mini fridge size. And we're working on a bunch more that are a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot to have effectively something like a multi-tenant model, where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something.
It's providing the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. And then we build, also in house, what we call our payload hub, which has all the customer payloads attached, and our own kind of edge processing capabilities built into it. And so we integrate that and we launch it. Because they're in low Earth orbit, these things are orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So one of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time. We're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North Pole region. So we'll see them for 10 minutes, and then we won't see them for the next 90 minutes as they zip around the Earth collecting data. So one of the challenges that exists for a company like ours is that you have to be able to make real time decisions operationally in those short windows, decisions that can sometimes be critical to the health and safety of the spacecraft. It could be that we put ourselves into a low power state in the previous orbit, or that something potentially dangerous to the satellite occurred. And so as an operator, you need to very quickly process that data coming in. And not just the live data, but also the massive amounts of data that were collected in what we call the back orbit, which is the time that we couldn't see the spacecraft. >> We got it. So talk more about how you use InfluxDB to make sense of this data from all this tech that you're launching into space. >> Yeah, so previously, when I joined the company, we started off storing all of that, as Angelo did, in a regular relational database.
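A big part of making those long telemetry histories plottable in near real time is grouping raw samples into fixed time windows and aggregating each one, which is what InfluxDB does server-side when, as Caleb puts it, it intelligently groups the data by time interval. A toy version of that windowed aggregation in plain Python, with an invented battery field name:

```python
def aggregate_window(samples, window_s):
    """Group (epoch_seconds, value) samples into fixed windows and
    average each window: the downsampling idea that makes plotting a
    year of telemetry in one fast query possible."""
    buckets = {}
    for ts, val in samples:
        buckets.setdefault(ts - ts % window_s, []).append(val)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

# Hypothetical battery state-of-charge samples, one every 10 seconds.
samples = [(t, 80.0 + (t % 60) / 60) for t in range(0, 120, 10)]
windowed = aggregate_window(samples, 60)  # 60-second windows for the demo
print(windowed)
```

A real deployment would let the database do this in the query itself (for example with a windowed mean), so only a few thousand aggregated points ever cross the wire, no matter how much raw data is stored.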
And we found that it was so slow, and the size of our data would balloon over the course of a couple of days, to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So things like power levels, voltages, current counts, whatever metadata we need to monitor about the spacecraft, we now store in InfluxDB. And now we can actually easily store the entire volume of data for the mission life so far, without having to worry about the size bloating to an unmanageable amount. And we can also seamlessly query large chunks of data. Like, if I need to see, for example, as an operator, how my battery state of charge is evolving over the course of the year, I can have a plot in Influx that loads a year's worth of data in a fraction of a second, because it can intelligently group the data by a given time interval. So it's been extremely powerful for us to access the data. And as time has gone on, we've gradually migrated more and more of our operating data into Influx. So not only do we store the basic telemetry about the bus and our payload hub, but we're also storing data for our customers, data that our customers are generating on board. One example of a customer that's doing something pretty cool: they have a computer on our satellite which they can reprogram themselves to do some AI enabled edge compute type capability in space. And so they're sending us some metrics about the status of their workloads, in addition to the basics, like the temperature of their payload, their computer, or whatever else. And we're delivering that data to them through Influx in a Grafana dashboard that they can plot, where they can see, not only has this pipeline succeeded or failed, but also, where was the spacecraft when this occurred?
What was the voltage being supplied to their payload? Whatever they need to see, it's all right there for them, because we're aggregating all that data in InfluxDB. >> That's awesome. You're measuring everything. Let's talk a little bit about a term we throw around a lot: data driven. A lot of companies say, oh yes, we're data driven. But you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? >> Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of slow research, digging into the data, to an almost instantaneous one: seeing the data and making decisions based on it immediately, rather than having to wait for some processing. And that's something that I've also seen echoed in my current role. But to give another practical example: as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to, to see what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. Our primary language is Python, which isn't necessarily that fast. So what we've done, in the goal of being data driven, is publish metrics into Influx on how individual pieces of our data processing pipeline are performing. And we do that in production as well as in dev, so we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, on where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are.
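Instrumenting a pipeline the way Caleb describes can be as simple as timing each stage and emitting the duration as a metric point; which stage dominates then falls straight out of the data. A hedged sketch: the stage and metric names are invented, and a real system would write these points to InfluxDB rather than append them to a list.

```python
import time
from functools import wraps

METRICS = []  # stand-in for an InfluxDB write API

def timed_stage(name):
    """Decorator: record how long a pipeline stage takes as a metric point."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            METRICS.append({
                "measurement": "pipeline_timing",
                "tags": {"stage": name},
                "fields": {"duration_s": time.perf_counter() - start},
            })
            return result
        return wrapper
    return deco

@timed_stage("decode_frames")
def decode_frames(raw):
    # Hypothetical stage: invert each byte of a downlinked frame.
    return [b ^ 0xFF for b in raw]

decode_frames(bytes(range(8)))
print(METRICS[0]["tags"]["stage"])  # decode_frames
```

Because the same decorator runs in dev and production, the timing data accumulates continuously, and a dashboard over the `pipeline_timing` points answers "where should we optimize next" directly.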
Sometimes, before we started doing this, we found ourselves chasing rabbits that weren't necessarily the real root cause of the issues we were seeing. But now that we're being a bit more data driven, we are being much more effective in where we spend our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >> So you reduce those dead ends. Maybe Angelo, you could talk about what data driven means to you and your team? >> Yeah, I would say that having real time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images we collect with the telescope have good quality, and that they are within the specifications to meet our science goals. And if they are not, we want to know that as soon as possible and then start fixing problems. >> Yeah, so I mean, you think about these big science use cases, Angelo. They are extremely high precision; you have to have a lot of granularity, very tight tolerances. How does that play into your time series data strategy? >> Yeah, so one of the subsystems that produces the high volume and high rates is the structure that supports the telescope's primary mirror. On that structure, we have hundreds of actuators that compensate the shape of the mirror for deformations. That's part of our active optics system. So that's really real time. And we have to record these high data rates, and we have requirements to handle data at a few hundred hertz. So we can easily configure our database with millisecond precision; that's for telemetry data. But for events, sometimes we have events that are very close to each other, and then we need to configure the database with higher precision. >> Mm-hmm. >> For example, microseconds. >> Yeah, so Caleb, what are your event intervals like?
>> So I would say that, as of today on the spacecraft, the level of timing that we deal with probably tops out at about 20 hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, the ability to have high precision data, is extremely important for these kinds of scientific applications. And I'll give you an example from when I worked on the rockets at Astra. There, our baseline rate when we would ingest data during a test was 500 hertz, so 500 samples per second. And in some cases, we would actually need to ingest much higher rate data, even up to 1.5 kilohertz. So extremely, extremely high precision data there, where timing really matters a lot. And one of the really powerful things about Influx is the fact that it can handle this; that's one of the reasons we chose it. Because there are times when you're looking at the results of a firing where you're zooming in. I talked earlier about how in my current job we often zoom out to look at a year's worth of data. Here you're zooming in, to where your screen is occupied by a tiny fraction of a second. And you need to see, same as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. That can be something like: hey, I opened this valve at exactly this time. And we want to have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment; was that before or after this valve opened? That kind of visibility is critical in these kinds of scientific applications, and it's absolutely game changing to be able to see it in near real time, with a really easy way for engineers to visualize this data themselves without having to wait for us software engineers to go build it for them. >> Can the scientists do self serve?
Or do you have to design and build all the analytics and queries for the scientists? >> From my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing: generally, I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx, because the interface we expose to them is Grafana, a generic, open source graphing library that is very similar to Influx's own Chronograf. >> Sure. >> And what it does is provide a very intuitive UI for building your query. So you choose a measurement, and it shows a drop down of available measurements, and then you choose the particular field you want to look at, and again, that's a drop down. So it's really easy for our users to discover. And there are point and click options for doing math and aggregations; you can even do predictions, all within Grafana. The Grafana user interface is really just a wrapper around the APIs and functionality that Influx provides. So yes, absolutely, that's been the most powerful thing about it: it gets us software engineers out of the way, we who may not know quite as much as the scientists and engineers who are closer to the interesting math. And they build these crazy dashboards where I'm just like, wow, I had no idea you could do that; I had no idea that's something you would want to see. And absolutely, that's the most empowering piece. >> Yeah, putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you? Is it self serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. And I have an example just from last week.
We had an engineer at the observatory who was building a dashboard to monitor the cooling system of the entire building. And he was familiar with InfluxQL, which was the primary query language in version one of InfluxDB. That was really a challenge, because he had the data spread across multiple InfluxDB measurements, and he was doing one query for each measurement and was not able to produce what he needed. But that's the perfect use case for Flux, which is the new data scripting language that InfluxData developed and introduced as the main language in version two. And so with Flux, he was able to combine data from multiple measurements and summarize this data in a nice table. So yeah, having a more flexible and powerful language also allows you to make better visualizations. >> So Angelo, where would you be without a time series database, that technology generally, and maybe specifically InfluxDB, as one of the leading platforms? Would you be able to do this? >> Yeah, it's hard to imagine doing what we are doing without InfluxDB. And I don't know, perhaps it would just be a matter of time before we rediscovered InfluxDB. >> Yeah. How about you, Caleb? >> Yeah, I mean, it's all about using the right tool for the job. I think for us, when I joined the company, we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly. Even querying short periods of data was taking on the order of seconds, which is just not possible for operations. So if you're dealing with large volumes of time series data, a time series database is the right tool for the job, and Influx is a great one for it. So yeah, it's absolutely required for this kind of data; there is not really any other option. >> Guys, this has been really informative. It's pretty exciting to see how the edge is mountaintops, low Earth orbits. Space is the ultimate edge.
Isn't it? I wonder if you could answer two questions to wrap here. What comes next for you guys? And is there something that you're really excited about, that you're working on? Caleb, maybe you could go first, and then Angelo, you can bring us home. >> Yeah, absolutely. So basically, what's next for Loft Orbital is more satellites, a greater push towards infrastructure, and really making, well, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now to make that happen. It's an extremely exciting time to be in this company and to be in this industry as a whole, because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and, with companies like SpaceX now rapidly lowering the cost of launch, it's just a really exciting place to be. We're launching more satellites, we're scaling up for some constellations, and our ground system has to be improved to match. So there are a lot of improvements that we are working on to really scale up our control systems, to be best in class and capable of handling such large workloads. So yeah, what's next for us is really 10Xing what we are doing. And that's extremely exciting. >> And anything else you are excited about? Maybe something personal? Maybe a tidbit you want to share. Are you guys hiring? >> We're absolutely hiring. We've got positions all over the company. We need software engineers; we need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. Personal wise, I don't have any interesting personal things that are data related, but my current hobby is sea kayaking, and I'm working on becoming a sea kayaking instructor. So if anyone likes to go sea kayaking out in the San Francisco Bay area, hopefully I'll see you out there. >> Love it.
All right, Angelo, bring us home. >> Yeah. So what's next for us is, we're getting this telescope working and collecting data, and when that happens, it's going to be just a deluge of data coming out of this camera. And handling all that data is going to be really challenging. I wonder if I'll even be here for that, but I'm looking forward to it. Like, for next year we have an important milestone, which is our commissioning camera: a simplified version of the full camera is going to be on sky, and so most of the system has to be working by then. >> Any cool hobbies that you are working on, or any side projects? >> Yeah, actually, during the pandemic I started gardening. And I live here in Tucson, Arizona. It gets really challenging during the summer because of the lack of water, right? And so, we have an automatic irrigation system at the farm, and I'm trying to develop a small system to monitor the irrigation and make sure that our plants have enough water to survive. >> Nice. All right guys, with that, we're going to end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value at the edge, in the cloud, and of course beyond, in space. Really transformational work that you guys are doing. So congratulations, and I really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. In a moment, I'll be back to wrap up. This is Dave Vellante, and you are watching theCUBE, the leader in high tech enterprise coverage. (upbeat music)
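Earlier in this conversation, Angelo described the observatory's core Flux use case: combining data from multiple measurements into one summary table, something that took one InfluxQL query per measurement plus manual stitching in version one. The following is a minimal Python sketch of that kind of timestamp join. It is not Flux itself, and the measurement names and readings are invented for illustration, not taken from the observatory's actual schema:

```python
# Hypothetical sketch: inner-joining two time series "measurements" on timestamp,
# the kind of cross-measurement combination Flux enables in InfluxDB 2.x.
# Measurement names and readings are invented for illustration.

coolant_temp = {  # timestamp -> degrees C
    "2022-04-21T00:00Z": 18.2,
    "2022-04-21T00:01Z": 18.4,
    "2022-04-21T00:02Z": 18.9,
}
pump_rpm = {  # timestamp -> pump speed
    "2022-04-21T00:00Z": 1200,
    "2022-04-21T00:01Z": 1250,
    "2022-04-21T00:02Z": 1300,
}

def join_measurements(a, b):
    """Inner-join two measurements on their shared timestamps."""
    shared = sorted(set(a) & set(b))
    return [(ts, a[ts], b[ts]) for ts in shared]

table = join_measurements(coolant_temp, pump_rpm)
for ts, temp, rpm in table:
    print(f"{ts}  temp={temp}  rpm={rpm}")
```

In Flux proper this would be expressed with `join()` or `pivot()`; the point is that a single script produces the combined table the engineer needed for his dashboard.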

Published Date : Apr 21 2022



Shayn Hawthorne, AWS | AWS re:Invent 2018


 

>> Live, from Las Vegas, it's theCUBE covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey, welcome back everyone. Live, theCUBE here in Las Vegas for AWS re:Invent. I'm John Furrier with my co-host, Dave Vellante. Day three of wall-to-wall coverage, holding our voices together, excited for our next guest, Shayn Hawthorne, general manager at AWS, for the exciting project around the Ground Station, a partnership with Lockheed Martin. Really kind of outside the box, announced on Tuesday, not at the keynote, but this is a forward-thinking, real project in which satellites can be provisioned like cloud computing resources. Totally innovative, and it will change the nature of edge computing, feeding connectivity to anything. So, thanks for joining us. >> Thank you guys for having me. You're right, my voice is going out this week too. We've been doing a lot of talking. (John laughs) >> Great service. This is really compelling, 'cause it changes the nature of the network. You can feed connectivity, 'cause power and connectivity drive everything. Power, you've got battery. Connectivity, you've got satellite. Totally obvious, now that you look at it, but not before this. Where did it come from? How did it all start? >> You know, it came from listening to our customers. Our customers have been talking with us, and they had a number of challenges in getting the data off of their satellites and down to the ground. So, we listened to these customers, and we listened to the challenges they were experiencing in getting their data to the ground, having access to ground stations, having the ability at the network level to move the data around the world quickly to where they wanted to process it. And then also, having complex business process logic and other things that were required to help them run their satellite downlinks and uplinks.
And then finally, the ability to actually have AWS services right there where the data comes down into the cloud, so that you can do great things with that data within milliseconds of it hitting the ground. >> So it's essentially satellite as a service, with a back-end data capability: data ingestion, analytics, and management capability. How'd that idea come about? I mean, it just underscores the scale of AWS, and I'm thinking about other things that you might be able to do. Where'd the idea come from? How was it germinated? >> Well, actually, let me just say one thing: we would actually call it Ground Station as a service. It's the ground station on the surface of the earth that communicates with the satellite; it allows us to get the data off the satellite or send commands up to it. And so, like I was saying, we came up with the idea by talking to our customers. I think this is an incredible part of working at Amazon, because we actually follow through with our leadership principles. We worked backwards from the customer. We actually put together a press release and a frequently asked questions document, a PR/FAQ, in the traditional six-page format. And we started working it through our leadership, and it got all the way to the point that Andy and the senior leadership team within AWS made the decision that they were going to support our idea and the concept and the architecture that we had come up with to meet these customers' requirements. We were able to get to that by about March of 2018. By the end of March, Andy had even had us go in and talk with Jeff. He gave us the thumbs up as well, and after six months, we've already procured 24 antennas. We've already built two Ground Stations in the United States, and we've downlinked hundreds of contacts with satellites, bringing Earth imagery down and other test data to prove that this system works, getting it ready for preview.
>> It's unbelievable, because you're basically taking the principles of AWS, which is eliminating the heavy lifting, and applying that to building ground stations. Presumably, right, so, the infrastructure that you're building out, do you have partners that you're working with? Are there critical players there that are enabling this? >> Yeah, it's really neat. We've actually had some really great partnerships, both with helping us build AWS Ground Station, as well as partners that helped us learn what the customers need. Let me tell you, first off, about the partnership that we've had with Lockheed Martin to develop a new, innovative antenna system that will collaboratively come together with the parabolic reflectors that AWS Ground Station uses. They've been working on this really neat idea that gives them the ability to downlink data all over the entire United States in a very resilient way, which means if some of their ground station antennas don't work, due to man-made reasons or due to natural occurrences, then we're actually able to use the rest of the network to still continue to downlink data. And then, as a complement, we bring in AWS Astra for certain types of downlinks, and also to provide uplink commanding to other satellites. The other customer partnership was working with the actual customers who are going to use AWS Ground Station, like DigitalGlobe, BlackSky, Capella SAR, HawkEye 360, who all provided valuable inputs to us about exactly what they need in a ground station. They need the ability to rapidly downlink data, and they need the ability to pay by the minute, so that they're actually able to use variable expense to pay for satellite downlinks instead of capital expenses to go out and build it. And by doing that, we're able to offer them a product that's 80% cheaper than if they'd had to go out and build a complete network similar to what we built.
And they're able to, like I said before, access great AWS services like Rekognition or SageMaker, so that they can make sense of the data that they bring down to Earth. >> It's a big idea, and I'm just sort of curious as to how, and if, you validated it. How'd you increase the probability that it was actually going to, you know, deliver a business return? Can you talk about that process? >> Well, we were really focused on validating that we could meet customer challenges and really give them their data securely and reliably, with great redundancy. So we validated, first off, by building our antennas and the Ground Stations and the preview software. We finished over a month and a half ago, and we've been rigorously testing it with our customer partners, and then letting them validate that the information we provided back to them was 100% as good as what they would've received on their own network. We tested it out, and we've actually got a number of pictures and images downlinked over at our kiosk that were all brought in on AWS Ground Station, and it's a superb product over there. >> So Shayn, how does it work? You write this press release, this working-backwards document... describe that process. Was that process new to you? Had you done it at other companies? How did you find it? Was it a useful process? Obviously it was, 'cause you got the outcome you were looking for, but talk a little bit more about that approach. >> Yeah, it's actually very cool. I've only been at AWS for a year and a half, and so I would say that my experience at AWS so far completely validates working backwards from customers. We were turned on to the idea by talking to our customers and the challenges they cited. I started doing analysis after the job was assigned to me by Dave Nolton, my boss, and I started putting together the first draft of our PR/FAQ, and started engaging with customers immediately.
Believe it or not, we went through 28 iterations of the PR/FAQ before we even got to Andy. Everybody in our organization took part in helping to make it better, adding in, asking hard questions, ensuring that we were really thinking this idea through and that we were obsessing over the customer. And then after we got to Andy, and we got through approving that, it probably went through another 28 iterations before we got to Jeff. And then we went through talking with him. He asked additional hard questions to make sure that we were doing right by the customer and that we were putting together the right kind of product. And finally, we've been iterating on it ever since, until we launched it a couple of days ago. >> Sounds like you were iterating, raising the bar, and it resonated with customers. >> Totally. And even as part of getting out of it-- >> That's Amazon's language of love. >> And then your engineering resources, you know, if people are asking you hard questions, you obviously need engineering folks to validate that it's doable. At what point do you get that engineering resource? How does that all work? >> Well, it's neat. In my division, the Region Services Division, we actually were supporting it completely from within the division, all the way until we got approval from Andy. And then we actually went in and started hiring very good skills. To show you what kind of incredible people we have at Amazon, we only had to hire about 10% of the space expertise from outside of the company. We were actually able to bring together 80-90% of the needed skills to build AWS Ground Station from people who've been working at Amazon.com and AWS. And we came together, we really learned quickly, we iterated, failed fast, put things together, changed it. And we were able to deliver the product in time, whole cloth, made from our own expertise. >> So just to summarize, from idea to "we're going to do this," how long did that take? >> I'd say that took about three months.
From idea to making a decision, three months. From decision to having a preview product that we could launch at re:Invent, six months. >> That's unbelievable. >> It is. >> If you think about something of this scope. >> And it was a joy. I mean, it was incredible to be a part of something like this. It was the best work I've ever done in my life. >> Yeah, space is fun. >> It is. >> Shayn, thanks for coming on theCUBE, sharing your story and insight. We love this. We're going to keep following it. And we're going to see you guys at the Public Sector Summits, and all the events you guys are at, so looking forward to seeing, and provisioning, some satellites. >> I'm looking forward to showing you what we do next. So thank you for having me. >> Great. We'll get a sneak peek. >> Congratulations. >> This is theCUBE here in Las Vegas. We'll be back with more coverage after this short break. (futuristic music)
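The pay-by-the-minute model Shayn describes, variable expense per contact instead of capital expense for a ground network, is simple to put rough numbers on. The sketch below uses invented figures purely for illustration; it does not quote real AWS Ground Station pricing or any customer's actual costs:

```python
# Back-of-the-envelope sketch of downlink as a variable expense.
# All dollar figures are hypothetical; no real AWS Ground Station pricing is quoted.

CAPEX_OWN_NETWORK = 10_000_000  # hypothetical cost to build your own antenna network
PER_MINUTE_RATE = 10.0          # hypothetical $/minute of antenna time

def yearly_downlink_cost(contacts_per_day: int, minutes_per_contact: int) -> float:
    """Variable cost of a year of satellite contacts billed by the minute."""
    return contacts_per_day * minutes_per_contact * PER_MINUTE_RATE * 365

cost = yearly_downlink_cost(contacts_per_day=12, minutes_per_contact=8)
print(f"Yearly pay-per-minute spend: ${cost:,.0f} vs ${CAPEX_OWN_NETWORK:,} up front")
```

With these made-up numbers, the operator spends a few hundred thousand dollars a year instead of an eight-figure build-out, which is the shape of the 80%-cheaper claim, though any real comparison depends entirely on contact volume and actual rates.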

Published Date : Nov 29 2018



Jeremy Gardner & Genevieve Roch Decter | Blockchain Week NYC 2018


 

>> From New York, it's theCUBE, covering Blockchain Week. Now, here's John Furrier. >> Hello everyone, welcome back to this special CUBE exclusive, on-the-water coverage of the awesome cryptocurrency event going on this week, Blockchain Week New York City. Decentral, Anthony Di Iorio, a big special event launching some great killer products. I'm here with two CUBE alumni that we introduced at Polycon 2018, Genevieve Roch-Decter and Jeremy Gardner. Great to see you guys. >> Thanks for having us. >> So you guys look fabulous, you look beautiful, you're smart, we're on a boat, we're partying. It feels like Prague, it feels like prom. Feels like we are at the top of another bubble. >> Couldn't feel better. Five more boat parties and then the bubble's officially at the top, but we've only had the first boat party. >> Well, the real existential question is, what do we do next? You know, we've graduated from nightclubs and strip clubs and now to super yachts. Like, do we go on a spaceship next, or a Boeing jet? >> Yeah, I mean, the options are somewhat limited in how we scale up the crypto parties. I actually heard today one of my clients is launching a crypto mining operation in space that's fueled by solar power, so we might be going to space. Elon Musk wants to get involved. >> I agree, like, where are we going? You guys are awesome, I love the creative. So this party to me is really a testament to the community. Talk about the community. I see Polycon was great in Puerto Rico, they had Restart Week and that, but I heard these guys saying here at Decentral that the community's fragmented. Is the community fragmented? Seems like it's not out there, or it's just one pocket of the community. >> I think the community... so we have 10,000 people at Consensus, okay? These are 10,000 people that have gone down the rabbit hole, and they're all at the Hilton in midtown Manhattan kind of going, like, how'd you get involved, why are you here? 10,000 people is a lot, but I think that, yeah, we're at the Decentral party, so some of these communities are being fragmented. But I think we're having, like, infrastructure built to kind of connect the broader world to these things, whether it's custodial services, whether it's, like, tonight, the Jaxx 2.0 wallet, and, you know, everything that's getting involved there. I don't know, Jeremy? >> Jeremy, you're like an international traveler, so you... >> It's 100 percent an echo chamber. More importantly, rabbit holes are dark and confusing places. They're winding, and a lot of people are here for very different reasons, and thus, when you have all these new entrants to the industry, to this technology, here for all these different reasons, of course you have some fragmentation. You know, in many regards, the ideological and philosophical roots of Bitcoin and blockchain technology have been lost on many of the new entrants, and so it takes time to get to the point where we're all aligned. I think different blockchains and different applications of this technology will have different kinds of approaches to how people think about it. There's always gonna be fragmentation, because this is a massively growing industry that touches upon every kind of business, governmental and non-governmental. So actually, fragmentation is a relative term. >> Genevieve, I saw you, and you guys are working with things from cannabis to coin. I think you had a cannabis cabin this week in New York? >> Yeah, we're doing that tomorrow night, actually. So crypto and cannabis are two of the hottest millennial sectors, right? And so, at Grit Capital, we like to say we like to dance on the edge of chaos. I actually found out about a cannabis company in Vancouver, so just outside Vancouver, that is using a crypto mining operation and all the excess heat that is coming off that to power a grow-op. So we're literally at the intersection of crypto and cannabis: not just handling money, but handling energy in a different way, which is fascinating. >> That's real mission impact investing right there, you know, using energy to grow weed. That's the societal impact, isn't it? Good, bad... I mean, even as you look at it, you know, better cannabis, healthy cannabis, is a mission people care about. >> We're helping people's wallets and we're helping people's minds, right, in ways that the government, banks, and pharmaceutical companies are fighting against. So, you know, if you can't beat them, join them. So I welcome AstraZeneca and the Bank of Canada to come on board our mission. >> This is officially turning into a CUBE After Dark episode. Jeremy, I gotta get your thoughts on these industries, because look at cannabis. We joke about it, but that's an example of another market, the zillion markets that are coming online that are gonna be impacted. So fragmentation is a relative term, but hey, look at it. I mean, energy tech is infrastructure tech, and so that's what I'm concerned about: who nails the infrastructure for network effects, and what's the instrumentation for that? That's the number one question. >> That is the essential question for the protocols, whether it's Ethereum or Bitcoin, EOS, Dfinity, so forth. The protocol that provides the strongest and most adaptable infrastructure and foundational technology is going to be one of the main winners. And so the names I mentioned, they're up there, they're very competitive, but it's anybody's game right now. I think any blockchain can come along right now and be the winner a decade from now, and for entrepreneurs that represents a challenge, because you have to figure out what blockchain to go build on. This is why I am big on investing in interoperable ledger technologies that enable the transfer of smart contracts and crypto assets between blockchains. >> It's a great, great segue. Let's just get an update since we last talked. What are you working on? What are you investing in? What's new in your world? Share the update. >> Sure. So now my fund is officially launched. >> How much? >> We launched with just over 15 million dollars, and amazingly, we launched at the perfect time. We're already up 55%. We've made one investment for the venture fund so far, which actually transferred over from my personal investment portfolio, but doing great. I have really run the gamut in terms of investments we're making, on the equity side of things and in crypto assets, but what we're seeing is really accomplished entrepreneurs coming to this space. Actually, I have more optimism than I had felt at Polycon. At Polycon I was like, this market needs to correct in a real way. Today I think that correction's been prolonged. If we were gonna feel a lot of pain, it was gonna be two months ago, but instead I think it's gonna be one to three years before the market goes through the correction that we need to see for the real shakeout to happen, because so many of these teams that I think are garbage have so much money. >> Yeah, and they're just floating around. That's got to work its way out. >> It's just like a bad burrito. At some point, it's gotta pass. >> Genevieve, what are you working on? I see you've got Grit Capital. What's the update on your end? What's new? >> Yeah, amazing. Actually, literally tonight, probably about 60 minutes ago, my business partner and I signed one of the fastest-growing exchanges in Canada, called Einstein Exchange, as a client. So these guys have only ever raised, like, one and a half million US, and they're the biggest exchange in Canada by sign-ups, active accounts. So they're probably doing, like, almost a hundred million in top-line transaction volume, and they're probably never going public; somebody's probably gonna buy them. But we're gonna be marketing them across the country, getting customers. I mean, the tagline is: it doesn't take an Einstein to open an account. It shouldn't take an Einstein to buy Bitcoin. You can literally get this account set up in under 60 seconds. >> So they're emphasizing ease of use, reducing the steps it takes to do it and get it up and running fast. >> Absolutely. Like, my dad could do it. >> All right, so we now follow you on Instagram and Facebook, which is phenomenal, by the way. Great lifestyle. What's the coolest thing you've done since we last talked at Polycon? >> Wow, Polycon was kind of a high. It really peaked, and then everyone got sick. Like, our team got sick, Polymath, Untraceable, because everybody just got the flu. We were, like, on adrenaline and we kept going. Ah, what's the coolest thing that we've done since then? I think it's signing up cool companies like Einstein. We also signed a big cannabis company in Colombia called Khiron. They're about to go public. I don't know, Cole, what do you think? I don't know. >> Maybe... what's the coolest thing you've done? Travel? >> So, last night Jeremy and I just met. We're together on a Blockchain Research Institute project that Sonova Financial is backing, and meeting him... >> So you guys are working together on a special project right now? How's that going? What's that about? >> JCO, which is a new sort of financial services firm they're creating, what could effectively be understood as a compliant coin offering that is available to more than just accredited investors. They're making the ICO something that falls within the pre-existing regulatory framework and is also accessible to your average Joe, which I think is really important if we're going to follow the initial vision for both blockchain technology and offerings. >> All right, final question. I know you guys want to get back to your dancing and schmoozing, networking, doing big deals, having fun. What is Blockchain Week New York all about? We call it Blockchain Week, we're here in New York. What the hell's happening? There's been a lot of events. What's your guys' assessment of what you observed and saw? Anything you can share for the people who didn't make it to New York, or are online reading all the action? What's happened? >> So, as someone that did not attend Consensus, who spoke at three other events, or is speaking at three other events, I can say with certainty that the New York Blockchain Week has been about bringing together virtually everyone in the industry to connect and kind of catch up with one another, which is really important. We don't have that many events. Miami was too short, the industry's gotten too big, but having a full week of activities in New York City has enabled me to kind of foster relationships... >> Oh yeah, man, you get a lot of work done, John. >> Well, I've gotten so much work done. I haven't had to actually be at the conferences to reconnect with just about everyone that I want to in the industry. That's really special. >> Genevieve, what is your observation? What have you observed? Share some anecdotes, some insight on what happened this week. I know Fluidity started, I was just chatting with them about it. It started over the weekend, it's gone on, and we're now into Thursday tomorrow coming up. >> Well, I don't think it's a coincidence that Goldman Sachs came out today and said that they were launching some sort of digital currency. >> Marketing, yeah, exactly, using the power of the 10,000 people at Consensus. >> But yeah, I agree with what Jeremy says. It's not really about being at Consensus, it's about what happens, like, behind closed doors. It's all these decentralized parties that are happening. Open doors, but, like, you know, we hosted a Grit Capital event; we had a hundred people in a suite at the Dream Hotel, and it was just like, you put the biggest CEOs of the mining companies in the world together, and put those with investors in a room. It's, like, you know, 100 people, and that's where the deals happen. It's not like in the big, you know, huge auditorium where, like, nobody looks at each other and everyone's on their phone. >> Well, I gotta tell you, the entrepreneurship side is booming, so I totally love the entrepreneurial side. Check, check, check: access to capital, new kinds of business models, economics. So we reported on all that. To me, the big story is, Wall Street in New York City has been kind of stuck. The products are kind of old, antiquated, like the financial products, and that's why Goldman's coming out. They got nothing. >> What, they don't have anything? What have they got? >> So you see it's stagnant. They've got traditional products, approximately nothing really new or fresh. So in comes crypto, just doing a crypto wash. So I think I see the New York crowd going, this is something that is exciting, and we could productize it potentially. I don't think they know yet what that is, but I think some of the things that are going on, you guys, I like. So, my dad's always the kind of barometer to this whole thing, and he's like, when are they gonna come out with, like, a Salesforce.com for the blockchain, right? Like some sort of application where it doesn't matter if you're, like, in legal, if you're, like, in investment banking, some sort of pervasive application that just goes wild. Do we have that yet? When is that happening? >> Jeremy? >> It's the Netscape moment, if you will, the moment that blockchain technology becomes tangible. And now, in retrospect, a few years out, we may decide, just as Netscape was the original browser for the Internet, that that moment may have already happened; we don't really know it. Maybe it's been something like Ethereum or Augur, you know, something where there's a use case, but people haven't wrapped their heads around it yet. But if that hasn't happened yet, it's coming. We're on the cusp of it, because people know what Bitcoin is, they've heard of the blockchain. It is part of the zeitgeist now, and that cultural relevance is so important for having that Netscape moment. >> Jeremy, Genevieve, thanks so much for spending the time here on the ground, on the water, for our special CUBE coverage of Blockchain Week New York City. Consensus, you had all kinds of different events, you had the crypto house where we were at, tons of events, the Fluidity conference, all this stuff going on. Good to see you guys. You look great. Thanks for sharing the update here on theCUBE's special coverage. I'm John Furrier, thanks for watching. >> Thanks.

Published Date : May 21 2018

SUMMARY :

Jeremy Gardner joins John Furrier during theCUBE's coverage of Blockchain Week New York City and Consensus 2018 to talk about where deals actually get done at crypto events, Wall Street's stagnant financial products, and whether blockchain has had its "Netscape moment" yet.

ENTITIES

Entity | Category | Confidence
Jeremy | PERSON | 0.99+
Canada | LOCATION | 0.99+
Vancouver | LOCATION | 0.99+
Colombia | LOCATION | 0.99+
Bank of Canada | ORGANIZATION | 0.99+
Goldman Sachs | ORGANIZATION | 0.99+
Sonova Financial | ORGANIZATION | 0.99+
Puerto Rico | LOCATION | 0.99+
New York | LOCATION | 0.99+
100 percent | QUANTITY | 0.99+
Jeromy Gartner | PERSON | 0.99+
Jeremy Gardner | PERSON | 0.99+
New York | LOCATION | 0.99+
John Faria | PERSON | 0.99+
Elon Musk | PERSON | 0.99+
jeremy | PERSON | 0.99+
New York City | LOCATION | 0.99+
JCO | ORGANIZATION | 0.99+
100 people | QUANTITY | 0.99+
Chiron | ORGANIZATION | 0.99+
Astra Zeneca | ORGANIZATION | 0.99+
New Yorker | LOCATION | 0.99+
New York City | LOCATION | 0.99+
Jeremy Jeremy | PERSON | 0.99+
10,000 people | QUANTITY | 0.99+
three other events | QUANTITY | 0.99+
Genevieve | PERSON | 0.99+
Carly Jeremy | PERSON | 0.99+
today | DATE | 0.99+
10,000 people | QUANTITY | 0.99+
Miami | LOCATION | 0.99+
tomorrow night | DATE | 0.98+
two months ago | DATE | 0.98+
10,000 people | QUANTITY | 0.98+
over 15 million dollars | QUANTITY | 0.98+
three other events | QUANTITY | 0.98+
John | PERSON | 0.98+
55% | QUANTITY | 0.98+
two | QUANTITY | 0.98+
Einstein | ORGANIZATION | 0.98+
Joe | PERSON | 0.98+
Jeremy Jeremy | PERSON | 0.98+
under 60 seconds | QUANTITY | 0.98+
about 60 minutes ago | DATE | 0.98+
this week | DATE | 0.98+
one and a half million | QUANTITY | 0.97+
one | QUANTITY | 0.97+
two super yachts | QUANTITY | 0.97+
Genevieve Dec Monroe | PERSON | 0.97+
five more boat parties | QUANTITY | 0.97+
tonight | DATE | 0.97+
both | QUANTITY | 0.97+
this week | DATE | 0.96+
Goldman | ORGANIZATION | 0.96+
Prague | LOCATION | 0.95+
Genevieve Roch Decter | PERSON | 0.95+
blockchain Research Institute | ORGANIZATION | 0.95+
first boat party | QUANTITY | 0.95+
polycon 2018 | EVENT | 0.94+
John furry | PERSON | 0.94+
Netscape | TITLE | 0.94+
Polycom | ORGANIZATION | 0.94+
u.s. | LOCATION | 0.93+
Blockchain Week | EVENT | 0.93+
new york | LOCATION | 0.93+
this week | DATE | 0.92+
NYC | LOCATION | 0.92+
almost a hundred million | QUANTITY | 0.91+
Thursday tomorrow | DATE | 0.91+
New York City D central | LOCATION | 0.9+
Hilton | LOCATION | 0.9+
midtown Manhattan | LOCATION | 0.89+
Cole | PERSON | 0.89+
one pocket | QUANTITY | 0.89+
lot of people | QUANTITY | 0.87+
Facebook | ORGANIZATION | 0.86+
lot of events | QUANTITY | 0.78+
Instagram | ORGANIZATION | 0.78+
three years | QUANTITY | 0.75+
last night | DATE | 0.74+
2018 | DATE | 0.74+
Salesforce | ORGANIZATION | 0.73+