Drew Nielsen, Teleport | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon, friends. My name is Savannah Peterson, here in the Cube Studios live from Detroit, Michigan, where we're at KubeCon and CloudNativeCon all week. Our last interview of the day served me a real treat and one that I wasn't expecting. It turns out that I am in the presence of two caddies. It's a literal episode of Caddyshack up here on theCUBE. John Furrier, I don't think the audience knows that you were a caddy. Tell us about your caddy days. >>I used to caddy when I was a kid at the local country club every weekend. This is amazing. Double loops every weekend. Make some bank, two bags on each shoulder. Caddying for the members, where you're going. Now I'm >>On show. Just, just really impressive >>Now. Now I'm caddying for theCUBE, where I caddy all this great content out to the audience. >>He's carrying the story of emerging brands and established companies on their cloud journey. I love it. John, well played. I don't wanna waste any more of this really wonderful individual's time, but since we now have a new trend of talking about everyone's Twitter handle here on theCUBE, this may be my favorite one of the day, if not of Q4 so far. Drew-not-reply, AKA Drew Nielsen, excuse me, is here with us from Teleport. Drew, thanks so much for being here. >>Oh, thanks for having me. It's great to be here. >>And so you were a caddy on a whole different level. Can you tell us >>About that? Yeah, so I was in university and I got tired after two years, and didn't have a car in LA, and met a pro golfer at a golf course, and took two years off and traveled around caddying for him and tried to get him through Q School. >>This is, this is fantastic. So if you're in school and your parents are telling you to continue going to school, know that you can drop out and be a caddy and still be a very successful television personality. Like both of the gentlemen at some point. 
>>Well, I never said my parents liked that decision, but we'll keep our day jobs. >>Yeah, exactly. And one of them is cloud native security, the hottest topic here at the show. Yep. I want to get into it. You guys are doing some really cool things. We hear zero trust, you know, ransomware, and I even talked with the CEO of Docker this morning about container security issues. Sure. There's a lot going on. So you guys are in the middle of it at Teleport. You guys have a unique solution. Tell us what you guys got going on. What do you guys do? What's the solution and what's the problem you solve? >>So Teleport is the first and only identity-native infrastructure access solution in the market. So breaking that down, what that really means is identity-native being the combination of secretless, getting rid of passwords, PAM vaults, key vaults. Yeah. Passwords written down. Basically the number one source of breach. And 50 to 80% of breaches, depending on whose numbers you want to believe, are how organizations get hacked. >>But it's not, password123 isn't protecting Cisco right now. >>Well, if you think about when you're securing infrastructure, and the second component being zero trust, which assumes the network is completely insecure, right? But everything is validated. Resource-to-resource security is validated. You know, it assumes work from anywhere. It assumes the security comes back to that resource. And we take the combination of those two into identity-native access, where we cryptographically validate identity, but more importantly, we make an absolutely frictionless experience, so engineers can access infrastructure from anywhere at any time. >>I'm just flashing on my roommates checking their little code-changing, you know, login dongle, essentially, and how frustrating that always was. I mean, talk about interrupting workflow, it was something that's obviously necessary, but >>Well, I mean, talk about frustration if I'm an engineer. 
Yeah, absolutely. You know, back in the day when you had these three-tier monolithic applications, it was kind of simple. But now as you've got modern application development environments, yeah, multi-cloud, hybrid cloud, whatever marketing term around how you talk about this, expanding sort of disparate infrastructure, engineers are sitting there going from system to system to machine to database to application. I mean, not even a conversation on Kubernetes yet. Yeah. And it's just, you know, every time you pull an engineer or a developer to go to a vault to pull something out, you're pulling them out for 10 minutes. Now, applications today have hundreds of systems, hundreds of microservices. I mean, 30 of these a day at nine minutes each, 270 minutes, times 60. And they also >>Do the math. Well, there's not only that, there's also the breach from manual error. I forgot to change the password. What is that password? I left it open, I left it on. >>Cognitive load. >>I mean, it's the manual piece. But even think about it, security has to be transparent, and engineers are really smart people. And I've talked to a number of organizations who are like, yeah, we've tried to implement security solutions and they fail. Why? They're too disruptive. They're not transparent. And engineers will work their way around them. They'll write it down, they'll do a workaround, they'll backdoor it, something. >>All right. So talk about how it works. But I, I mean, I'm getting the big picture here. I love this. Breaking down the silos, making engineers' lives easier, more productive. Clearly the theme, everyone, they want it, they're gonna need it. Whoever does that will win it all. How's it work? I mean, are you deploying something, is it code, is it in line? >>It's two binaries that you download, and really it starts with the core being the identity-native access proxy. Okay. So that proxy, I mean, if you look at the zero trust principles, it all starts with a proxy. 
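The back-of-the-envelope math in that exchange can be made concrete. The 30-trip and nine-minute figures come from the conversation; reading the "times 60" as a hypothetical 60-engineer team is my assumption:

```python
# Cost of credential round-trips, using the figures from the conversation.
# The 60x multiplier is a hypothetical team size, one reading of "times 60".
trips_per_day = 30        # vault lookups per engineer per day
minutes_per_trip = 9      # interruption cost of each lookup
engineers = 60            # assumed team size

minutes_per_engineer = trips_per_day * minutes_per_trip
team_hours_per_day = minutes_per_engineer * engineers / 60

print(minutes_per_engineer)   # 270 minutes lost per engineer per day
print(team_hours_per_day)     # 270.0 engineer-hours lost across the team daily
```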
Everything connects into that proxy, where all the access is gated, it's validated. And you know, from there we have an authorization engine. So we will be the single source of truth for all access across your entire infrastructure. So we bring machines, engineers, databases, applications, Kubernetes, Linux, Windows, we don't care. And we basically take that into a single architecture and single access platform that essentially secures your entire infrastructure. But more importantly, you can do audit. So for all of the organizations that are dealing with FedRAMP, PCI, HIPAA, we have a complete audit trail, down to a YouTube-style playback. >>Oh, interesting. We're, we're California and CCPA. >>Oh, GDPR. >>Yeah, exactly. It, it's, it's a whole shebang. So I, I love, and John, maybe you've heard this term a lot more than I have, but identity-native is relatively new to me as a term. And I suspect you have a very distinct way of defining identity. How do you guys define identity internally? >>So identity is something that is cryptographically validated. It is something you have. So it's not enough, if you look at, you know, credentials today, everyone's like, oh, I log into my computer, but that's my identity. No, it's not. Right. Those are attributes. Those are something that is secret for a period of time, until you write it down. But I can't change my fingerprint. Right. And now I >>Was just >>Thinking of, well no, perfect case in point with Touch ID on your Mac there. Yeah. It's like, when we deliver that cryptographically validated identity, we use these secure modules in modern laptops or servers. Yeah. To store that identity, so that even if you're sitting in front of your computer, you can't get to it. But more importantly, if somebody were to take that and try to be you and try to log in with your fingerprint, it's >>Not, I'm not gonna lie, I love the Apple finger thing, you know, it's like, you know, face recognition, like it's really awesome. 
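The "cryptographically validated, something you have" idea can be sketched as a toy challenge-response exchange. To be clear, this is an illustration of the principle only, not Teleport's implementation, and every name in it is hypothetical; a real identity-native proxy would use short-lived certificates backed by hardware secure modules rather than a shared HMAC key:

```python
import hashlib
import hmac
import os

# Toy challenge-response sketch. NOT Teleport's implementation; names are
# hypothetical and the shared HMAC key stands in for hardware-backed keys.

DEVICE_KEY = os.urandom(32)   # stands in for a key locked inside secure hardware

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """The device proves possession of the key by signing a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def proxy_validates(known_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The proxy checks the response against the identity it has on record."""
    return hmac.compare_digest(sign_challenge(known_key, challenge), response)

challenge = os.urandom(16)    # fresh each session: no reusable secret to steal
response = sign_challenge(DEVICE_KEY, challenge)
print(proxy_validates(DEVICE_KEY, challenge, response))      # True
print(proxy_validates(os.urandom(32), challenge, response))  # False: wrong key
```

Because the challenge is fresh every session, there is no static password to write down or replay, which is the "secretless" property described above.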
>>It saves me a lot of time. I mean, even when you go through customs and they do the face scan now, it actually knows who you are, which is pretty wild compared to the last time, when you had to provide one. But it just shifted over maybe three months ago. >>Well, as long as no one chops your finger off like they do in the James Bond movies. >>I mean, we try and keep it light and fluffy here on theCUBE, but you know, if you want to do a severed-finger theme, we can talk about that too. >>Gabby, I was thinking more Minority Report. >>That's exactly what I, what I think of. >>Hit that one outta bounds. So I gotta ask, because you said you're targeting engineers, not IT departments. Is that because, in your mind, it is now the engineers, or is the solution more targeted? >>Well, if you really look at who's dealing with infrastructure on a day-to-day basis, those are DevOps individuals. Those are infrastructure teams. Those are site reliability engineering. They're the ones who are not only managing the infrastructure, but they're also dealing with the code on it and everything else. And for us, that is who is our primary customer, and that's who's doing it. >>What's the biggest problem that you're solving in this use case? Because you guys are nailing it. What's the problem that your identity-native solution solves? >>You know, right out of the gate, we remove the number one source of breach. And that is taking passwords, secrets, and keys off the board. That deals with most of the problem right there. But there are really two problems that organizations face. One is scaling. So as you scale, you get more secrets, you get more keys, you get all these things that are all increasing your attack vector in real time. >>Oh yeah. Across teams, locations. I can't even >>Take your pick. Yeah, it's across clouds, right? Any of it. >>On-prem, doesn't matter. >>Yeah. Any of it. 
We, and we allow you to scale, but do it securely, and the security is transparent, and your engineers will absolutely love it. What's the most important thing about this product? Engineers. Absolutely. >>What are they saying? What are some of those examples? Anecdotally, pull quotes out from engineering. >>The two we hear: we should have invented this ourselves. Or, you know, we have run into a lot of customers who have tried to home-brew this, and they're like, you know, we spent an inordinate amount of hours on it. >>And IT, or they got legacy from Microsoft or other solutions. >>Sure, yeah. But a lot of 'em is just, I wish I had done it myself. Or, you know, this is what security should be. >>It makes so much sense, and it gives the team such peace of mind. I mean, you never know when a breach is gonna come, especially >>It's peace of mind. But I think for engineers, a lot of times it deals with the security problem. Yeah. Takes it off the table so they can do their jobs. Yeah. With zero friction. Yeah. And you know, it's all about speed. It's all about velocity. You know, go fast, go fast, go fast. And that's what we enable. >>Some of the benefits to them is they get to save time, focus more on the tasks that they need to work on. >>Exactly. >>And get the job done. >>And on top of it, they answer the audit and compliance mail every time it comes. >>Yeah. Why are people, honestly, why are people doing this? Because, I mean, identity is just such a hard nut to crack. Everyone's got their silos. Vendors have 'em, clouds have 'em. Identity is the most fragmented thing on the planet. >>And it has been fragmented ever since my first RSA conference. >>I know. So will we ever get this do-over? Is there a driver? Is there a market force? Is this the time? >>I think the move to modern applications and to multi-cloud is driving this, because as those application stacks get more verticalized, you just cannot deal with the productivity hit. 
>>And of course the next big thing is supercloud, and that's coming fast. Savannah, you know, you know that's rocket. >>John is gonna be the thought leader and keyword leader of the word supercloud. >>Supercloud is enabling super services, as Brian Gracely of the Cloudcast pointed out on his Sunday podcast, of which, if that happens, supercloud will enable super apps in a new architecture. >>Please don't, and it'll be super. Just don't. >>Okay. Right. So what are you guys up to next? What's the big hot spot for the company? What are you guys doing? What's the big idea? You guys hiring? You put the plug in. >>You know, right now we are focused on delivering the best identity-native access platform that we can. And we will continue to support our customers that want to use Kubernetes, that want to use any different type of infrastructure, whether that's Linux, Windows, applications, or databases, wherever they are. >>Are your customers all of a similar DNA, or are you >>No, they're all over the map. They range everything from tech companies to financial services to, you know, fractional property. >>You seem like someone everyone would need. >>Absolutely. >>And I'm not just saying that to be a really clean endorsement from theCUBE, but >>If you were doing DevOps, yeah, and any type of forward-leaning, shift-left engineering, you need us, because we are basically making security-as-code a reality across your entire infrastructure. >>Love this. What about the team DNA? Are you in a scale-growth stage right now? What's going on? >>Absolutely. >>I was gonna say, I feel like you would have to be. >>Yeah, we're doing, we have a very positive outlook, and you know, even though the economic time is what it is, we're doing very well. >>How's the location? Where's the location of the headquarters now? With remote work it's pretty much virtual, probably. >>We're based in downtown Oakland, California. >>Woohoo. 
Bay Area representing on this stage right now. >>Nice. Yeah, we have a beautiful office right in downtown Oakland, and yeah, it's been great. >>Awesome. Love that. And are you hiring right now? I bet people might be. I feel like some of our CUBE watchers are here waiting to figure out their next big play. So love to hear that. Absolutely love to hear that. Besides Drew-not-reply, if people want to join your team, or say hello to you and tell you how brilliant you looked up here, or ask about your caddy days and maybe venture a guess as to who that golfer may have been that you were caddying for, what are the best ways for them to get in touch with you? >>You can find me on LinkedIn. >>Great. Fantastic. John, anything else from you? >>Yeah, I mean, I just think security is paramount. This is just another example of where the innovation has to kind of break through. Without good identity, everything could cripple. Then you start getting into the silos, and you can start getting into, you know, tracking it. You got user errors, you got, you know, one of the biggest security risks: people just leave systems open, they don't even know it's there. So, I mean, identity is the critical linchpin to solve for in security, to me. And that's totally >>Agreed. We even have a lot of customers who use us just to access basic cloud consoles. Yeah. >>So I was actually just gonna drive there a little bit, because I'm curious. It feels like a solution for obviously complex systems and stacks, but given the utility and what sounds like an extreme ease of use, I would imagine people use this for day-to-day stuff within their >>We have customers who use it to access their AWS consoles. We have customers who use it to access Grafana dashboards. You know, since we're sitting here at KubeCon, accessing Lens, Rancher, all of the amazing DevOps tools that are out there. >>Well, I mean, true. 
I mean, you think about all the reasons why people don't adopt this new federated approach, it's because the IT guys did it, and in the world we're moving into, the developers are in charge. And so we're seeing the trend where developers are taking the DevOps, and the data and the security teams are now starting to reset the guardrails. What's your reaction to that? >>Well, you know, I would say that >>Over the top. >>Well, I would say that, you know, your DevOps teams and your infrastructure teams and your engineers, they are the new kingmakers. Yeah. Straight up. Full stop. >>You heard it first, folks. >>And that's a headline right there. >>That is a headline. I mean, they are the new kingmakers, but they are being forced to do it as securely as possible. And our job is really to make that as easy and as frictionless as possible. >>Awesome. And it sounds like you're absolutely nailing it. Drew, thank you so much for being on the show. >>Thanks for having me today. >>This has been an absolute pleasure. John, as usual, a joy. And thank all of you for tuning in to theCUBE, live here at KubeCon from Detroit, Michigan. We look forward to catching you for day two tomorrow.

Published Date : Oct 27 2022



Angelo Fausti & Caleb Maclachlan | The Future is Built on InfluxDB


 

>> Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb Maclachlan, who's senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't want to miss this interview, folks. Caleb, let's start with you. You work for an extremely cool company, you're launching satellites into space. Of course, doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >> Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a company that's a series B startup now, and our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. And what we're trying to do is change that from a super specialized problem that has an extremely high barrier of access to an infrastructure problem, so that it's almost as simple as deploying a VM in AWS or GCP to get your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, stuff like that. So that's kind of our mission, and just to give a really brief example of the kind of customer that we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation for the internet of things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean. 
So, and it's really little, and they've been able to stay a small startup that's focused on their product, which is that super crazy, complicated, cool radio, while we handle the whole space segment for them, which, you know, before Loft was really impossible. So that's our mission: providing space infrastructure as a service. We are kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. >> Yeah. So amazing, Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so, I guess just a little bit about me. For some people, they don't necessarily know what they want to do earlier in their life. For me, I was five years old and I knew I wanted to be in the space industry. So I started in the Air Force, but have stayed in the space industry my whole career and been a part of, this is the fifth space startup that I've been a part of, actually. So I kind of started out in satellites, spent some time working in the launch industry on rockets, and now I'm here back in satellites, and honestly, this is the most exciting of the different space startups that I've been a part of. >> Super interesting. Okay. Angelo, let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys, the Observatory, you're up way up high, you get a good look at the Southern sky. And I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close, you've got to be super excited. Give us the update on the Observatory and your role. >> All right. So, yeah, Rubin is a state-of-the-art observatory that is under construction on a remote mountain in Chile. And with Rubin, we'll conduct the Legacy Survey of Space and Time. 
We're going to observe the sky with an eight-meter optical telescope and take 1,000 pictures every night with a 3.2-gigapixel camera. And we are going to do that for 10 years, which is the duration of the survey. >> Yeah, amazing project. Now, you earned a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went out to earn a PhD in astronomy and astrophysics. So this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right, about 15 years. I studied physics in college, then I got a PhD in astronomy. And I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB, get into it? How do you use the platform? Maybe Caleb, you could start. >> Yeah, absolutely. So the first company that I extensively used InfluxDB in was a launch startup called Astra. And we were in the process of designing our first generation rocket there, and testing the engines, pumps, everything that goes into a rocket. And when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. And at first, you know, that's the way that a lot of engineers and scientists are used to working, and people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly, and then quickly also backport all of the data that we had collected thus far into Influx. 
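For a sense of what backfilling telemetry into InfluxDB looks like at the wire level, here is a minimal hand-rolled sketch of InfluxDB's line protocol. The measurement, tag, and field names are made up for illustration, and a real client library would add escaping, batching, and the HTTP write API:

```python
def line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Build one simplified InfluxDB line-protocol record (no escaping)."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical engine-test sample, like the LabVIEW data described above.
line = line_protocol(
    "engine_test",
    {"stand": "A1", "sensor": "pump_inlet"},
    {"pressure_psi": 412.7, "temp_c": 18.3},
    1_666_000_000_000_000_000,
)
print(line)
# engine_test,stand=A1,sensor=pump_inlet pressure_psi=412.7,temp_c=18.3 1666000000000000000
```

Each record is one measurement, an indexed tag set, a field set of values, and a nanosecond timestamp, which is what makes bulk backfilling old test data straightforward.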
And what was amazing to see, and is kind of the super cool moment with Influx, is when we hooked that up to Grafana; Grafana is the visualization platform we used with Influx, 'cause it works really well with it. There was this aha moment for our engineers, who were used to this post-process kind of method for dealing with their data, where they could just almost instantly, easily discover data that they hadn't been able to see before, and take the manual processes that they would run after a test and just throw those all in Influx, and have live data as tests were coming in. And I saw them implementing crazy rocket-equation-type stuff in Influx, and it just was totally game changing for how we tested. >> So Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about in the example that Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >> Yeah, correct. So I work with the data management team, and my first project was to record metrics that measured the performance of our software, the software that we use to process the data. So I started implementing that in our relational database, but then I realized that in fact I was dealing with time series data, and I should really use a solution built for that. And then I started looking at time series databases, and I found InfluxDB. That was back in 2018. Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series, and every point in that time series we call a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1,000 points every night. 
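A quick count backs up the scale he describes, assuming, for simplicity, an observation every night of the ten-year survey:

```python
# Rough size of the visits time series, from the figures quoted above.
points_per_night = 1000
nights_per_year = 365     # simplification: assumes no weather or downtime
years = 10

total_points = points_per_night * nights_per_year * years
print(total_points)       # 3650000 visit records over the whole survey
```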
It's actually not too much data compared to other problems; it's really just a different time scale. >> The telescope at the Rubin Observatory is, pun intended, I guess, the star of the show. And I believe I read that it's going to be the first of the next-gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. That's like 40 moons in an image, amazingly fast as well. What else can you tell us about the telescope? >> This telescope, it has to move really fast, and it also has to carry the primary mirror, which is an eight-meter piece of glass. It's very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300-ton structure, sits on a tiny film of oil, which has the diameter of a human hair, and that makes an almost zero-friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide-field telescope, so each image has, in diameter, the size of about seven full moons. And with that, we can map the entire sky in only three days. And of course, during operations, everything is controlled by software and it is automatic. There's a very complex piece of software called the Scheduler, which is responsible for moving the telescope, and the camera, which is recording 15 terabytes of data every night. 
But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about the, you got these dishwasher size satellites, kind of using a multi-tenant model, I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So, we have in space some satellites already that, as you said, are like dishwasher, mini fridge kind of size. And we're working on a bunch more that are a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot to have effectively something like a multi-tenant model, where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something, where it's providing the power, it has the solar panels, it has some radios attached to it. It handles the attitude control, basically steers the spacecraft in orbit. And then we also build in-house what we call our payload hub, which has any customer payloads attached and our own kind of Edge processing sort of capabilities built into it. And so we integrate that, we launch it, and those things, because they're in lower Earth orbit, they're orbiting the earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So one of the unique challenges of operating spacecraft in lower Earth orbit is that generally you can't talk to them all the time. So we're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North pole region. >> Talk more about how you use InfluxDB to make sense of this data through all this tech that you're launching into space.
>> Previously, when I joined the company, we started off storing all of that, as Angelo did, in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple days to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So that's things like power level, voltage, currents, counts, whatever metadata we need to monitor about the spacecraft, we now store that in InfluxDB. And now we can actually easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount, and we can also seamlessly query large chunks of data. Like if I need to see, for example, as an operator, I might want to see how my battery state of charge is evolving over the course of the year. I can have a plot in Influx that loads that in a fraction of a second for a year's worth of data, because it can intelligently group the data by a time interval. So it's been extremely powerful for us to access the data. And as time has gone on, we've gradually migrated more and more of our operating data into Influx. >> Yeah. Let's talk a little bit about, we throw this term around a lot, you know, data driven, a lot of companies say, "Oh yes, we're data driven." But you guys really are, I mean, you got data at the core. Caleb, what does that mean to you? >> Yeah, so, you know, I think the clearest example of when I saw this be like totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of kind of slow researching, digging into the data, to an almost instantaneous one, seeing the data, making decisions based on it immediately rather than having to wait for some processing.
And that's something that I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time; about a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. My primary language is Python, which isn't necessarily that fast. So what we've done, with the goal of being data-driven, is publish metrics on how individual pieces of our data processing pipeline are performing into Influx as well. And we do that in production as well as in dev. So we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. And sometimes, before we started doing this, we found ourselves kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. But now that we're being a bit more data driven there, we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >> Yeah, and of course that's how you reduce those dead ends. Maybe Angelo you could talk about what data-driven means to you and your teams. >> I would say that having real time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images that we collect with the telescope have good quality, and that they are within the specifications to meet our science goals.
And so if they are not, we want to know that as soon as possible and then start fixing problems. >> Caleb, what are your sort of event, you know, intervals like? >> So I would say that, as of today on the spacecraft, the level of timing that we deal with probably tops out at about 20 Hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, the ability to have high precision data, is extremely important for these kinds of scientific applications, and I'll give an example from when I worked on the rockets at Astra. There, our baseline rate for ingesting data during a test is 500 Hertz, so 500 samples per second, and in some cases we would actually need to ingest much higher rate data, even up to like 1.5 kilohertz, so extremely, extremely high precision data there, where timing really matters a lot. And, you know, one of the really powerful things about Influx is the fact that it can handle this. That's one of the reasons we chose it, because there's times when we're looking at the results of a firing where you're zooming in, you know, I talked earlier about how in my current job we often zoom out to look at a year's worth of data. You're zooming in to where your screen is occupied by a tiny fraction of a second, and you need to see, same thing as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. So that can be something like, "Hey, I opened this valve at exactly this time," and we want to have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment, was that before or after this valve opened?
That kind of visibility is critical in these kinds of scientific applications, and absolutely game changing to be able to see that in near real time, and with a really easy way for engineers to be able to visualize this data themselves without having to wait for us software engineers to go build it for them. >> Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? >> Well, from my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing is that, generally, I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx, because the interface that we expose to them is Grafana, which is a generic, open source graphing library that is very similar to Influx's own Chronograf. >> Sure. >> And what it does is, it provides a very intuitive UI for building your queries. So you choose a measurement, and it shows a dropdown of available measurements. And then you choose the particular fields you want to look at, and again, that's a dropdown. So it's really easy for our users to discover, and there's kind of point and click options for doing math, aggregations. You can even do like prediction kind of things, all within Grafana, the Grafana user interface, which is really just a wrapper around the APIs and functionality that Influx provides. >> Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you, is it self-serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. >> Yeah, I mean, it's all about using the right tool for the job.
I think for us, when I joined the company, we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly; even querying short periods of data was taking on the order of seconds, which is just not possible for operations. >> Guys, this has been really formative. It's pretty exciting to see how the edge is mountaintops, lower Earth orbits, I mean, space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here. You know, what comes next for you guys? And is there something that you're really excited about that you're working on? Caleb, maybe you could go first, and then Angelo you can bring us home. >> Basically what's next for Loft Orbital is more satellites, a greater push towards infrastructure, and really making, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, making that happen. It's an extremely exciting time to be in this company and to be in this industry as a whole. Because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be in. We're launching more satellites, we are scaling up for some constellations, and our ground system has to be improved to match. So there's a lot of improvements that we're working on to really scale up our control software to be best in class and make it capable of handling such a large workload. >> Are you guys hiring? >> We are absolutely hiring. We have positions all over the company, so we need software engineers, we need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website, if this is at all interesting. >> All right, Angelo, bring us home. >> Yeah.
So what's next for us is really getting this telescope working and collecting data. And when that happens, it's going to be just a deluge of data coming out of this camera, and handling all that data is going to be really challenging. Yeah, I want to be here for that, I'm looking forward to it. Like, for next year we have an important milestone, which is our commissioning camera, which is a simplified version of the full camera. It's going to be on sky, and so yeah, most of the system has to be working by then. >> Nice. All right guys, with that we're going to end it. Thank you so much, really fascinating, and thanks to InfluxDB for making this possible, really groundbreaking stuff, enabling value creation at the Edge, in the cloud, and of course, beyond in space. So, really transformational work that you guys are doing, so congratulations, and really appreciate the broader community. I can't wait to see what comes next from having this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high tech enterprise coverage. >> Welcome. Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf, we'd love to hear what you're building. >> Thanks for watching Moving the World with InfluxDB, made possible by Influx Data. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment.
If you're dealing with large and/or fast data volumes, and you want to scale cost effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out. You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out. And poke around Influx Data. They are the folks behind InfluxDB, and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE, we'll see you soon. (upbeat music)

Published Date : May 18 2022



The Future Is Built On InfluxDB


 

>>Time series data is any data that's stamped in time in some way. That could be every second, every minute, every five minutes, every hour, every nanosecond, whatever it might be. And typically that data comes from sources in the physical world like devices or sensors, temperature gauges, batteries, any device really, or things in the virtual world: could be software, maybe it's software in the cloud, or data in containers or microservices or virtual machines. So all of these items, whether in the physical or virtual world, they're generating a lot of time series data. Now, time series data has been around for a long time, and there are many examples in our everyday lives. All you gotta do is punch up any stock ticker and look at its price over time in graphical form. That's a simple use case that anyone can relate to, and you can build timestamps into a traditional relational database. You just add a column to capture time. As well, there are examples of log data being dumped into a data store that can be searched and captured and ingested and visualized. Now, the problem with the latter example that I just gave you is that you gotta hunt and peck and search and extract what you're looking for. And the problem with the former is that traditional general purpose databases are designed as sort of a Swiss army knife for any workload. And there are a lot of functions that get in the way and make them inefficient for time series analysis, especially at scale. Like when you think about OT and edge scale, where things are happening super fast, ingestion is coming from many different sources, and analysis often needs to be done in real time or near real time. And that's where time series databases come in.
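As a concrete sketch of the "just add a column" approach the narration mentions (illustrative only, using SQLite): it works, but indexing, retention, and downsampling are all left to the application, which is the gap a purpose-built time series database fills.

```python
import sqlite3

# A tiny, hypothetical illustration of time series in a general purpose
# relational database: one extra column captures the timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (ts INTEGER, sensor TEXT, value REAL)")
rows = [(1000 + i, "temp", 20.0 + i * 0.5) for i in range(5)]
db.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# Time-range queries are just WHERE clauses over the timestamp column;
# every time series optimization is the application's problem.
(avg,) = db.execute(
    "SELECT AVG(value) FROM readings WHERE ts BETWEEN 1001 AND 1003"
).fetchone()
print(avg)
```

At small volumes this is perfectly fine, which is why, as the program notes, hundreds of millions of such applications already exist.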
>>They're purpose built and can much more efficiently support ingesting metrics at scale and then comparing data points over time. Time series databases can write and read at significantly higher speeds and deal with far more data than traditional database methods. And they're more cost effective: instead of throwing processing power at the problem, the underlying architecture and algorithms of time series databases can optimize queries, and they can reclaim wasted storage space and reuse it. At scale, time series databases are simply a better fit for the job. Welcome to Moving the World with InfluxDB, made possible by Influx Data. My name is Dave Vellante and I'll be your host today. Influx Data is the company behind InfluxDB, the open source time series database designed specifically to handle time series data. As I just explained, we have an exciting program for you today, and we're gonna showcase some really interesting use cases. First, we'll kick it off in our Palo Alto studios, where my colleague John Furrier will interview Evan Kaplan, who's the CEO of Influx Data. After John and Evan set the table, John's gonna sit down with Brian Gilmore. He's the director of IoT and emerging tech at Influx Data. And they're gonna dig into where Influx Data is gaining traction and why adoption is occurring and why it's so robust. And they're gonna have tons of examples and double click into the technology. And then we bring it back here to our east coast studios, where I get to talk to two practitioners doing amazing things in space with satellites and modern telescopes. These use cases will blow your mind. You don't want to miss it. So thanks for being here today. And with that, let's get started. Take it away, Palo Alto. >>Okay. Today we welcome Evan Kaplan, CEO of Influx Data, the company behind InfluxDB. Welcome Evan. Thanks for coming on. >>Hey John, thanks for having me. >>Great segment here on the InfluxDB story. What is the story?
Take us through the history. Why time series? What's the story? >><laugh> So the history is actually pretty interesting. Um, Paul Dix, my partner in this and our founder, um, super passionate about developers and developer experience. And, um, he had worked on Wall Street building a number of time series kind of trading platforms for trading stocks. And from his point of view, it was always what he would call a yak shave, which means you had to do a ton of work just to start doing work, which means you had to write a bunch of extrinsic routines. You had to write a bunch of application handling on existing relational databases in order to come up with something that was optimized for a trading platform or a time series platform. And he just developed this real clear point of view: this is not how developers should work. And so in 2013, he went through Y Combinator, and he made his first commit to open source InfluxDB at the end of 2013. And basically, you know, from my point of view, he invented modern time series, which is: you start with a purpose-built time series platform to do these kinds of workloads, and you get all the benefits of having something right outta the box. So a developer can be totally productive right away. >>And how many people in the company? What's the history of employees and stuff? >>Yeah, you know, I always forget the number, but it's something like 230 or 240 people now. Um, I joined the company in 2016, and I love Paul's vision. And I just had a strong conviction about the relationship between time series and IoT. Cuz if you think about it, what sensors do is they speak time series: pressure, temperature, volume, humidity, light. They're measuring, they're instrumenting something over time. And so I thought that would be super relevant over the long term, and I've not regretted it.
And it's interesting at that time, go back in the history, you know, the role of databases, well, relational database is the one database to rule the world. And then as clouds started coming in, you starting to see more databases, proliferate types of databases and time series in particular is interesting. Cuz real time has become super valuable from an application standpoint, O T which speaks time series means something it's like time matters >>Time. >>Yeah. And sometimes data's not worth it after the time, sometimes it worth it. And then you get the data lake. So you have this whole new evolution. Is this the momentum? What's the momentum, I guess the question is what's the momentum behind >>You mean what's causing us to grow. So >>Yeah, the time series, why is time series >>And the >>Category momentum? What's the bottom line? >>Well, think about it. You think about it from a broad, broad sort of frame, which is where, what everybody's trying to do is build increasingly intelligent systems, whether it's a self-driving car or a robotic system that does what you want to do or a self-healing software system, everybody wants to build increasing intelligent systems. And so in order to build these increasing intelligent systems, you have to instrument the system well, and you have to instrument it over time, better and better. And so you need a tool, a fundamental tool to drive that instrumentation. And that's become clear to everybody that that instrumentation is all based on time. And so what happened, what happened, what happened what's gonna happen? And so you get to these applications like predictive maintenance or smarter systems. And increasingly you want to do that stuff, not just intelligently, but fast in real time. 
So millisecond response, so that when you're driving a self-driving car and the system realizes that you're about to do something, essentially you wanna be able to act in something that looks like real time. All systems want to do that, want to be more intelligent, and they want to be more real time. And so we just happen to, you know, we happen to show up at the right time in the evolution of a >>Market. It's interesting, near real time isn't good enough when you need real time. >><laugh> Yeah, it's not, it's not. And it's like, everybody wants it, even when you don't need it, ironically, you want it. It's like, you know, you buy a new television, you want that one feature even though you're not gonna use it; you decide that real time is a buying criteria. >>So, I mean, what you're saying then is near real time is getting as close to real time as possible, as fast as possible. Right. Okay. So talk about the aspect of data, cuz we're hearing a lot of conversations on the Cube in particular around how people are implementing and actually getting better. So, iterating on data, but you have to know when it happened to know how to fix it. So this is a big part of what we're seeing, with people saying, Hey, you know, I wanna make my machine learning algorithms better after the fact, I wanna learn from the data. Um, how do you see that evolving? Is that one of the use cases of sensors, as people bring data in off the network, getting better with the data, knowing when it happened? >>Well, for sure. What you're saying is, none of this is non-linear, it's all incremental. And so if you take, you know, just as an easy example, a self-driving car, what you're doing is you're instrumenting that car to understand where it can perform in the real world in real time.
And if you do that, if you run the loop, which is: I instrumented it, I watch what happens, oh, that's wrong, I have to correct for that, I correct for that in the software. If you do that a billion times, you get a self-driving car. But every system moves along that evolution. And so you get the dynamic of, you know, constantly instrumenting, watching the system behave, and correcting it. And the self-driving car is one thing, but even in the human genome, if you look at some of our customers, you know, people doing solar arrays, people doing power walls, like all of these systems are getting smarter. >>Well, let's get into that. What are the top applications? What are you seeing with InfluxDB, the time series? What's the sweet spot for the application use case, and some customers? Give some >>Examples. Yeah. So it's pretty easy to understand on one side of the equation, that's the physical side: sensors. Sensors are getting cheap, obviously we know that, and the whole physical world is getting instrumented: your home, your car, the factory floor, your wrist watch, your healthcare, you name it. It's getting instrumented in the physical world. We're watching the physical world in real time. And so there are three or four sweet spots for us, but they're all on that side. They're all about IoT. So think about consumer IoT projects like Google's Nest, Tado, um, Particle sensors, um, even delivery engines like Rappi, who deliver, the Instacart of South America, like anywhere there's a physical location, and that's on the consumer side. And then another exciting space is the industrial side. Factories are changing dramatically over time, increasingly moving away from proprietary equipment to developer-driven systems that run operations, because when you're building a factory, the systems all have to get smarter.
And then, um, lastly, a lot in renewables and sustainability. So a lot, you know, Tesla, Lucid Motors, Nikola Motors, um, you know, lots to do with electric cars, solar arrays, windmill arrays, just anything that's gonna get instrumented, where that instrumentation becomes part of what the purpose >>Is. It's interesting. The convergence of physical and digital is happening with the data, IoT. You mentioned, you know, you think of IoT, look at the use cases there, it was proprietary OT systems, now becoming more IP enabled, internet protocol, and now edge compute getting smaller, faster, cheaper, AI going to the edge. Now you have all kinds of new capabilities that bring that real time and time series opportunity. Are you seeing IoT going to a new level? Where are the IoT dots connecting to? Because, you know, as these two cultures merge, yeah, operations, basically industrial, factory, car, they gotta get smarter. Intelligent edge is a buzzword, but I mean, it has to be more intelligent. Where's the action in all this? So the >>Action really, at the core, it's at the developer, right? Because you're looking at these things, it's very hard to get an off the shelf system to do the kinds of physical and software interaction. So the action really happens at the developer. And so what you're seeing is a movement in the world that maybe you and I grew up in, with IT or OT, moving increasingly to that developer-driven capability. And so all of these IoT systems, they're bespoke, they don't come out of the box. And so the developer, the architect, the CTO, they define: what's my business? What am I trying to do? Am I trying to sequence a human genome and figure out when these genes express themselves, or am I trying to figure out when the next heart rate monitor's gonna show up on my Apple Watch, right? What am I trying to do? What's the system I need to build?
And so starting with the developers is where all of the good stuff happens here, which is different than it used to be, right. Used to be you'd buy an application or a service or a SaaS thing for it, but with this dynamic, with this integration of systems, it's all about bespoke. It's all about building >>Something. So let's get to the developer real quick. The real highlight point here is the data. I mean, I could see a developer saying, okay, I need to have an application for the edge, IoT edge or car. I mean, we're gonna have, I mean, Tesla's got applications of the car, it's right there. I mean, yes, there's the modern application life cycle now. So take us through how this impacts the developer. Does it impact their CI/CD pipeline? Is it cloud native? I mean, where does this all go to? >>Well, so first of all, you're talking about, there was an internal journey that we had to go through as a company, which I think is fascinating for anybody who's interested: we went from primarily a monolithic software that was open sourced to building a cloud native platform, which means we had to move from an agile development environment to a CI/CD environment. So to the degree that you are moving your service, whether it's, you know, Tesla monitoring your car and updating your power walls, right, or whether it's a solar company updating the arrays, right, to the degree that that service is cloud, then increasingly you move from agile development to a CI/CD environment, where you're shipping code to production every day. And so it's not just the developers; it's all the infrastructure to support the developers to run that service, and that sort of stuff. I think that's also gonna happen in a big way >>With your customer base that you have now, and as you see it evolving with InfluxDB, is it that they're gonna be writing more of the application, or relying more on others? I mean, obviously there's an open source component here.
So when you bring in kind of the old way and the new way, the old way was: I got a proprietary platform running all this OT stuff, and I gotta write, here's an application that's general purpose. Yeah. I have some flexibility, somewhat brittle, maybe not a lot of robustness to it, but it does its job >>A good way to think about this is versus a new way >>Is >>What? So yeah, a good way to think about this is, what's the role of the developer slash architect slash CTO, that chain, within a large enterprise or a company? And so, um, the way to think about it is, I started my career in the aerospace industry <laugh>, and so when you look at what Boeing does to assemble a plane, they build very, very few of the parts. Instead, what they do is they assemble. They buy the wings, they buy the engines, they assemble, actually, they don't buy the wings, it's the one thing they build: they buy the material for the wings and build the wings themselves, cuz there's a lot of tech in the wings. And they end up being assemblers, smart assemblers, of what ends up being a flying airplane, which is a pretty big deal even now. And so what happens with software people is they have the ability to pull from, you know, the best of the open source world. So they would pull a time series capability from us. Then they would assemble that with potentially some ETL logic from somebody else, or they'd assemble it with, um, a Kafka interface to be able to stream the data in. And so they become very good integrators and assemblers, but they become masters of that bespoke application. And I think that's where it goes, cuz you're not writing native code for everything. >>So they're more flexible. They have faster time to market cuz they're assembling way faster, and they get to still maintain their core competency. Okay. Their wings, in this case. >>They become increasingly not just coders, but designers and developers. They become broadly builders, is what we like to think of it.
People who start and build stuff. By the way, this is no different than what the people just up the road at Google have been doing for years, or the tier ones, Amazon, building all their own. >>Well, I think one of the things that's interesting is this idea of developing a system architecture. I mean, systems have consequences when you make changes. So when you have cloud, data center, on-premise, and edge working together, how does that work across the system? You can't have a wing that doesn't work with the other wing, kind of thing. >>Exactly. But that's where that Boeing, that airplane-building analogy comes in for us. We've really been thoughtful about that, because for IoT it's critical. So our open source edge has the same API as our cloud native stuff and as our enterprise on-prem edge. Our multiple products have the same API, and they have a relationship with each other; they can talk with each other. So the builder builds it once. And this is where, when you start thinking about the components that people have to use to build these services, you wanna make sure, at least at that base layer, that database layer, that those components talk to each other. >>So I'll have to ask you: if I'm the customer, I put my customer hat on. Okay. Hey, I'm dealing with a lot. >>Does that mean you have a PO? <laugh> >>A big check, a blank check, if you can answer this question. Only if you get the question right. I got all this important operation stuff. I got my factory, I got my self-driving cars. This isn't like trivial stuff. This is my business. How should I be thinking about time series? Because now I have to make these architectural decisions, as you mentioned, and it's gonna impact my application development. So it's a huge decision point for your customers. What should I care about the most? What's in it for me? Why is time series important? >>Yeah, that's a great question.
So chances are, if you've got a business that's, you know, 20 or 25 years old, you were already thinking about time series. You probably didn't call it that. You built something on Oracle, or you built something on IBM's DB2, right? And you made it work within your system. And so that's what you started building. So it's already out there; there are probably hundreds of millions of time series applications out there today. But as you start to think about this increasing need for real time, and you start to think about increasing intelligence, about optimizing those systems over time (I hate the word, but digital transformation), then you start with time series. It's a foundational base layer for any system that you're gonna build. There's no system I can think of where time series shouldn't be the foundational base layer. If you just wanna store your data and leave it there, and then maybe look it up every five years, that's fine. That's not time series. Time series is when you're building a smarter, more intelligent, more real time system. And the developers now know that. And so the more they play a role in building these systems, the more obvious it becomes. >>And since I have a PO for you and a big check, yeah, what's the value to me when I implement this? What's the end state? What's it look like when it's up and running? What's the value proposition for me? >>So when it's up and running, you're able to handle the queries, the writing of the data, the downsampling of the data, transforming it in near real time, so that the other systems that depend on it, whether it's adjusting a solar array, trading energy off of a Powerwall, or some sort of human genome work, those systems work better. So time series is foundational.
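To make that "foundational base layer" concrete: a time series record is just a measurement name, some tags, some fields, and a timestamp. A minimal sketch in Python that formats a reading as InfluxDB's documented line protocol (the measurement and tag names here are illustrative):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one reading as InfluxDB line protocol:
    <measurement>,<tag_set> <field_set> <timestamp>"""
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_set = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_set} {field_set} {ts_ns}"

line = to_line_protocol(
    "solar_power",
    {"site": "plant-7", "inverter": "inv-2"},
    {"watts": 4125.0},
    1667840461000000000,
)
print(line)
# solar_power,inverter=inv-2,site=plant-7 watts=4125.0 1667840461000000000
```

(A production writer would also escape special characters and mark string fields, per the line protocol spec; this sketch skips that.)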
It's not like it's doing every action that sits above it, but it's foundational to building a really compelling, intelligent system. I think that's what developers and architects are seeing now. >>Bottom line, final word. What's in it for the customer? What's your statement to the customer? What would you say to someone looking to do something in time series on the edge? >>Yeah, so it's pretty clear to us that if you view yourself as being in the business of building systems that you want to be increasingly intelligent, self-healing, autonomous, that you want to operate in real time, you start from time series. But I also wanna say what's in it for us at Influx: what's in it for us is people are doing some amazing stuff. You know, I highlighted some of the energy stuff, some of the human genome, some of the healthcare. It's hard not to be proud, or feel like, wow, somehow I've been lucky; I've arrived at the right time, in the right place, with the right people to be able to deliver on that. That's also exciting on our side of the equation. >>Yeah, it's critical infrastructure, critical operations. >>Yeah. >>Great stuff, Evan. Thanks for coming on. Appreciate this segment. All right. In a moment, Brian Gilmore, director of IoT and emerging technology at InfluxData, will join me. You're watching theCUBE, the leader in tech coverage. Thanks for watching. >>Time series data from sensors, systems, and applications is a key source in driving automation and prediction in technologies around the world. But managing the massive amount of timestamped data generated these days is overwhelming, especially at scale.
That's why InfluxData developed InfluxDB, a time series data platform that collects, stores, and analyzes data. InfluxDB empowers developers to extract valuable insights and turn them into action by building transformative IoT, analytics, and cloud native applications, purpose built and optimized to handle the scale and velocity of timestamped data. InfluxDB puts the power in your hands with developer tools that make it easy to get started quickly with less code. InfluxDB is more than a database; it's a robust developer platform with integrated tooling that's written in the languages you love, so you can innovate faster. Run InfluxDB anywhere you want by choosing the provider and region that best fits your needs across AWS, Microsoft Azure, and Google Cloud. InfluxDB is fast and automatically scalable, so you can spend time delivering value to customers, not managing clusters. Take control of your time series data so you can focus on the features and functionalities that give your applications a competitive edge. Get started for free with InfluxDB: visit influxdata.com/cloud to learn more. >>Okay. Now we're joined by Brian Gilmore, director of IoT and emerging technologies at InfluxData. Welcome to the show. >>Thank you, John. Great to be here. >>We just spent some time with Evan going through the company and the value proposition with InfluxDB. What's the momentum? Where do you see this coming from? What's the value coming out of this? >>Well, I think we're sort of hitting a point where the adoption of the technology is becoming mainstream. We're seeing it in all sorts of organizations, everybody from the most well funded, advanced, big technology companies to the smaller academics and the startups. And the managing of the sort of data that emits from that technology is time series, and us being able to give them a platform, a tool, that's super easy to use and easy to start
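"Easy to start" really can mean a handful of lines. A hedged sketch of a write against the documented InfluxDB 2.x HTTP API using only the Python standard library; the URL, org, bucket, and token are placeholders, and the request is only constructed here, not actually sent:

```python
import urllib.request

# Placeholders: point these at a real InfluxDB 2.x instance to actually write.
url = ("https://example-influxdb.local/api/v2/write"
       "?org=my-org&bucket=my-bucket&precision=ns")
line = "solar_power,site=plant-7 watts=4125.0 1667840461000000000"

req = urllib.request.Request(
    url,
    data=line.encode("utf-8"),                  # body is line protocol text
    headers={
        "Authorization": "Token MY-API-TOKEN",  # placeholder API token
        "Content-Type": "text/plain; charset=utf-8",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; skipped in this sketch.
print(req.get_method(), req.full_url)
```

In practice most people would reach for one of the official client libraries instead of raw HTTP, but the sketch shows how thin the layer underneath is.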
And then, of course, it will grow with them as they're successful; sort of riding along with them has been key to us. >>Evan was mentioning that time series has been on everyone's radar, and it's been in the OT business for years. Now, you go back to 2013, 2014, even five years ago: that convergence of physical and digital coming together, the IP-enabled edge. Edge has always been kind of hyped up, but why now? Why is the edge so hot right now from an adoption standpoint? Is it because it's just evolution, the tech getting better? >>I think it's twofold. I think that, you know, everybody was so focused on cloud over the last probably 10 years, mm-hmm <affirmative>, that they forgot about the compute that was available at the edge. And I think those, especially in OT and on the factory floor, who weren't able to take full advantage of cloud through their applications, you know, still needed to be able to leverage that compute at the edge. I think the big thing that we're seeing now, which is interesting, is that there's a hybrid nature to all of these applications, where there's definitely some data that's generated on the edge and definitely some data that's generated in the cloud. And it's the ability for a developer to tie those two systems together and work with that data in a very unified, uniform way that's giving them the opportunity to build solutions that really deliver value to whatever it is they're trying to do, whether it's the outer reaches of outer space or optimizing the factory floor. >>Yeah. I think one of the things you also mentioned, genome too: big data is coming to the real world. And IoT has been kind of like this thing for OT and some use cases, but now, with the cloud, all companies have an edge strategy now.
So yeah, what's the secret sauce? Because now this is a hot product for the whole world, not just industrial, but all businesses. What's the secret sauce? >>Well, I mean, I think part of it is just that the technology is becoming more capable, and that's especially true on the hardware side, right? Compute is getting smaller and smaller. And we find that supporting all the way down to the edge, even to the microcontroller layer, with our client libraries, and then working hard to make our applications, especially the database, as small as possible, so that it can be located as close to the point of origin of that data at the edge as possible, is fantastic. Now you can take that, run it locally, and do your local decision making. You can use InfluxDB as an input to the automation, control, and autonomy that people are trying to drive at the edge. But when you link it up with everything that's in the cloud, that's when you get all of the cloud scale capabilities of parallelized AI and machine learning and all of that. >>So what's interesting is the open source success has been something that we've talked about a lot on theCUBE, how people are leveraging that. You guys have users in the enterprise, users in the IoT market, mm-hmm <affirmative>, but you've got developers now too. Yeah. How do you see that emerging? How do developers engage? What are some of the things you're seeing that developers are really getting into with InfluxDB? >>Yeah. Well, I mean, I think there are the developers who are building companies, right? These are the startups and the folks that we love to work with, who are building new services, new products, things like that. And, you know, especially on the consumer side of IoT, there's a lot of that, just those developers.
But I think you gotta pay attention to those enterprise developers as well, right? There are tons of people with the title of engineer in your regular enterprise organizations. They're there for systems integration; they're there for looking at what they would build versus what they would buy. And a lot of them come from a strong open source background: they know the communities, they know the top platforms in those spaces, and they're excited to be able to adopt and use them, you know, to optimize inside the business as compared to just building a brand new one. >>You know, it's interesting too, when Evan and I were talking about open source versus closed OT systems, mm-hmm <affirmative>: how do you support the backwards compatibility of older systems while maintaining openness? There are dozens of data formats out there, a bunch of standards, protocols; new things are emerging. Everyone wants to have a control plane; everyone wants to leverage the value of data. How do you guys keep track of it all? What do you guys support? >>Yeah, well, either through direct connection. Like, we have a product called Telegraf. It's unbelievable: it's open source, it's an edge agent, and you can run it as close to the edge as you'd like. It speaks dozens of different protocols in its own right, a couple of which, MQTT and OPC UA, are very, very applicable to these OT use cases. But then we also, because we're not only open source but open in terms of our ability to collect data, have a lot of partners who have built really great integrations from their own middleware into InfluxDB. These are companies like Kepware and HighByte, who are really experts in those downstream industrial protocols. I mean, that's a business not everybody wants to be in.
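The Telegraf pipeline described here is driven by a declarative config. A hedged sketch of what an MQTT-to-InfluxDB agent might look like; the broker address, topic layout, token, org, and bucket are all placeholders:

```toml
# Ingest: subscribe to machine telemetry on an MQTT broker (placeholder address).
[[inputs.mqtt_consumer]]
  servers = ["tcp://broker.factory.local:1883"]
  topics = ["plant/+/telemetry"]
  data_format = "influx"   # assumes payloads are already line protocol

# Egress: write everything to an InfluxDB 2.x bucket (placeholder credentials).
[[outputs.influxdb_v2]]
  urls = ["https://example-influxdb.local:8086"]
  token = "MY-API-TOKEN"
  organization = "my-org"
  bucket = "factory-telemetry"
```

Swapping the input block for a different protocol plugin, or adding a second output, is the whole integration story in this model.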
It requires some very specialized, very hard work and a lot of support. And so by making those connections and building those ecosystems, we get the best of both worlds: the customers can use the platforms they need, up to the point where they'd be putting data into our database. >>What are some of the customer testimonials they share with you? Can you share some anecdotes, like, wow, that's the best thing I've ever used, this really changed my business? Or, this is a great tech that's helped me in these other areas? What are some of the soundbites you hear from customers when they're successful? >>Yeah. I mean, I think it ranges. You've got customers who are just finally being able to do the monitoring of assets out at the edge, in the field. We have a customer who has these tunnel-boring machines that go deep into the earth to drill tunnels for cars and trains and things like that. They are just excited to be able to stick a database onto those tunnel-boring machines, send them into the depths of the earth, and know that when they come out, all of that telemetry, at a very high frequency, has been safely stored, and then it can very quickly and instantly connect up to their centralized database. So just having that visibility is brand new to them, and that's super important. On the other hand, we have customers who are way far beyond the monitoring use case, who are actually using the historical records in the time series database to, like I think Evan mentioned, forecast things.
So for predictive maintenance: being able to pull in the telemetry from the machines, but then also all of that external enrichment data, the metadata, the temperatures, the pressures, who is operating the machine, those types of things; and being able to easily integrate with platforms like Jupyter notebooks, or all of those scientific computing and machine learning libraries, to build the models, train the models, and then send that information back down to InfluxDB to apply it and detect those anomalies. >>Yeah, and I think that's gonna be an area. I personally think that's a hot area, because if you look at AI right now, it's all about training the machine learning algorithms after the fact. So time series becomes hugely important. Yeah. 'Cause now you're thinking, okay, the data matters post time: the first time, and then it gets updated the next time. So it's like constant data cleansing, data iteration, data programming. We're starting to see this new use case emerge in the data field. >>Yep. Yeah. I mean, I'd agree. Yeah, of course. The ability to handle those pipelines of data smartly, intelligently, and then to be able to do all of the things you need to do with that data in stream, before it hits your central repository. And we make that really easy for customers, like with Telegraf: not only does it have the inputs to connect up to all of those protocols and the ability to capture and connect up to the partner data, it also has a whole bunch of capabilities around processing that data: enrich it, reformat it, route it, do whatever you need. So at that point you're basically able to shape your data in exactly the way you would want to. You're routing it to different destinations, and it's not something that really has been in the realm of possibility until this point. Yeah.
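The train-centrally, apply-at-the-edge loop described above can be sketched very roughly; this is not InfluxData's method, just a toy illustration with a made-up telemetry history and a trivial "model" (a baseline mean and standard deviation with a z-score threshold):

```python
import statistics

# Hypothetical historical telemetry (say, bearing temperature in deg C)
# pulled from the time series database for training; values are made up.
history = [61.2, 60.8, 61.5, 62.0, 61.1, 60.9, 61.7, 61.3]

# "Train" centrally: the model is just a baseline mean and standard deviation.
mean = statistics.fmean(history)
stdev = statistics.stdev(history)

def is_anomaly(reading, z_threshold=3.0):
    """Apply the model to a live reading: flag it if it lies more than
    z_threshold standard deviations from the trained baseline."""
    return abs(reading - mean) / stdev > z_threshold

print(is_anomaly(61.4))  # within the normal band
print(is_anomaly(75.0))  # far outside it
```

A real pipeline would retrain as new telemetry arrives and push the updated thresholds back down to the edge, which is exactly the round trip described in the conversation.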
>>And when Evan was on, it was great. He's the CEO, so he sees the big picture with customers. He kind of put the package together that said, hey, we've got a system, we've got customers, people are wanting to leverage our product. He's selling too, as well, so you have that whole CEO perspective. But he brought up this notion that there's multiple personas involved in kind of the InfluxDB system: architects, developers, and users. Can you talk about that reality as customers start to commercialize and operationalize this? From a commercial standpoint, you've got a relationship to the cloud. Yep. The edge is there. Yep. The edge is getting super important, but cloud brings a lot of scale to the table. So what is the relationship to the cloud? Can you share your thoughts on edge and its relationship to the cloud? >>Yeah. I mean, I think you can think of the edge really as the local information, right? It's generally compartmentalized to a point, like a single asset or a single factory line, whatever. But what people want is to be able to make the decisions there at the edge, locally, quickly, minus the latency of taking that large volume of data, shipping it to the cloud, and doing something with it there. So we allow them to do exactly that. Then what they can do is actually downsample that data, or detect the really important metrics or the anomalies, and then ship that to a central database in the cloud, where they can do all sorts of really interesting things with it. You can get that centralized view of all of your global assets, you can start to compare asset to asset, and then you can do those things we talked about, like predictive types of analytics or larger scale anomaly detection. >>So in this model, you have a lot of commercial operations, industrial equipment. Yep.
The physical plant, the physical business, with virtual data and cloud all coming together. What's the future for InfluxDB from a tech standpoint? Because you've got open source. Yep. There's an ecosystem there. Yep. You have customers who want operational reliability, for sure. I mean, so you've got organic <laugh> >>Yeah. Yeah. I mean, you know, we got iPhones when everybody was waiting for flying cars, right? So I don't know that we can absolutely perfectly predict what's coming, but I think there are some givens, and those givens are gonna be that the world is only gonna become more hybrid, right? So we are going to have much more widely distributed situations where you have data being generated in the cloud, data being generated at the edge, and then data generated at all points in between: physical locations as well as things that are very virtual. And I think we're building some technology right now that's going to allow the concept of a database to be much more fluid and flexible, sort of more aligned with what a file would be like. >>And so being able to move data to the compute for analysis, or move the compute to the data for analysis, those are the types of solutions that we'll be bringing to the customers over the next little bit. But I also think we have to start thinking about what happens when the edge is actually off the planet, right? I mean, we've got customers, you're gonna talk to two of them in the panel, who are actually working with data that comes from outside the earth, either in low earth orbit or, you know, all the way on the other side of the universe. And to be able to process data like that, and to do it well, we gotta build the fundamentals for it right now, on the factory floor and in the mines and in the tunnels.
So that we'll be ready for that one. >>I think you bring up a good point there, because one of the things that's common in the industry right now, and this is kind of new thinking: the hyperscalers have always been built by full stack developers, and even in the old OT world, Evan was pointing out, they built everything themselves, right? And the world's going to more assembly, with core competency and IP, intellectual property, being the core of their app. So faster assembly and building, but also integration. You've got all this new stuff happening. Yeah. And that's to separate out the data complexity from the app. Yes. So space, genome. Yep. Self-driving cars throw off massive data. >>It >>Does. So is Tesla, is the car the same as the data layer? >>I mean, yeah, it's certainly a point of origin. I think the thing that we wanna do is let the developers work on the world-changing problems, the things that they're trying to solve, whether it's energy or health or any of the other challenges that these teams are building against, and we'll worry about the time series data and the underlying data platform so that they don't have to. Right? I mean, I think you talked about it: for them just to be able to adopt the platform quickly, integrate it with their data sources and the other pieces of their applications, it's going to allow them to bring much faster time to market on these products. It's gonna allow them to be more iterative; they're gonna be able to do more testing and things like that. And ultimately it will accelerate the adoption and the creation of >>Technology. You mentioned earlier in our talk the unification of data. Yeah. How about APIs? 'Cause developers love APIs. In the cloud, unifying APIs: how do you view that? >>Yeah, I mean, we are APIs; that's the product itself.
Like everything, people like to think of it as having this nice front end, but the front end is built on our public APIs. And it allows the developer to build all of those hooks, for not only data creation, but then data processing, data analytics, and then data extraction to bring it to other platforms or other applications, microservices, whatever it might be. So, I mean, it is a world of APIs right now, and we bring a very useful set of them for managing the time series data these guys are all challenged with. >>It's interesting. You and I were talking before we came on camera about how data is gonna have this kind of SRE role that DevOps had: site reliability engineers, who manage a bunch of servers. There's so much data out there now. Yeah. >>Yeah. It's like reining in data, for sure. And I think one of the best jobs on the planet is gonna be to be that data wrangler: to understand what the data sources are, what the data formats are, how to efficiently move that data from point A to point B, and to process it correctly, so that the end users of that data aren't doing any of that hard upfront preparation, collection, and storage >>Work. Yeah. That's data as code. I mean, data engineering is becoming a new discipline, for sure. And the democratization is the benefit. Yeah. To everyone; data science gets easier. I mean, data science, but they wanna make it easy. Right. <laugh> Yeah. They wanna do the analysis. >>Right? Yeah. I mean, it's a really good point. I think we try to give our users as many ways as possible to get data in and get data out. We sort of think about it as meeting them where they are. Right.
So we build the client libraries that allow them to write to us directly from the applications and the languages that they're writing in, but then they can also pull it out. And at that point, nobody's gonna know the users, the end consumers of that data, better than the people who are building those applications. And so they're building these user interfaces, which are making all of that data accessible for their end users inside their organization. >>Well, Brian, great segment, great insight. Thanks for sharing all the complexities in IoT that you guys help take away with the APIs and the assembly, and all the system architectures that are changing. Edge is real, cloud is real. Yeah, absolutely. Mainstream enterprises. And you've got developer traction too, so congratulations. >>Yeah. It's >>Great. Well, thanks. Any last word you wanna share? >>No, just, I mean, please: if you're gonna check out InfluxDB, download it, try out the open source, contribute if you can. That's a huge thing; it's part of being in the open source community. But definitely just use it. I think once people use it, try it out, they'll understand very, >>Very quickly. So open source, with developers, enterprise, and edge coming together, all together. You're gonna hear more about that in the next segment too. Right. Thanks for coming on. Okay. Thanks. When we return, Dave Vellante will lead a panel on edge and data with InfluxDB. You're watching theCUBE, the leader in high tech enterprise coverage. >>Why the startup? We move really fast. We find that InfluxDB can move as fast as us. It's just a great group, very collaborative, very interested in manufacturing, and we see a bright future in working with Influx. My name is Aaron Seley. I'm the CTO at HighByte.
HighByte is one of the first companies to focus on manufacturing data and apply the concepts of DataOps: treat that data as an asset to deliver to the IT system, to enable applications like overall equipment effectiveness that can help the factory produce better, smarter, faster. Time series data in manufacturing is really important. If you take a piece of equipment, you have the temperature and pressure at the moment, which you can look at to see the state of what's going on. But without that context and understanding, you can't do what manufacturers ultimately want to do, which is predict the future. >>InfluxDB represents kind of a new way to store time series data, with some more advanced technology and, more importantly, more open technologies. The other thing that Influx does really well is, once the data's in Influx, it's very easy to get out, right? They have a modern REST API and other ways to access the data. That would be much more difficult with integrations against classic historians. HighByte can serve to model data and aggregate data on the shop floor from a multitude of sources, whether that be OPC UA servers, manufacturing execution systems, ERP, et cetera, and then push that seamlessly into Influx to then be able to run calculations. Manufacturing is changing, this is Industry 4.0, and what we're seeing is Influx being part of that equation, being used to store data off the unified namespace. We recommend InfluxDB all the time to customers that are exploring a new way to share manufacturing data, called the unified namespace, who have open questions around: how do I share this new data that's coming through my UNS or my MQTT broker? How do I store it and be able to query it over time? And we often point to Influx as a solution for that. It's a great brand, it's a great group of people, and it's a great technology. >>Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti.
He's a software engineer at the Vera C. Rubin Observatory. Also joining us is Caleb McLaughlin, senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't wanna miss this, folks. Caleb, let's start with you. You work for an extremely cool company; you're launching satellites into space. Doing that is, of course, highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >>Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a company, a series B startup now, whose mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive: you need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. And what we're trying to do is change that from a super specialized problem with an extremely high barrier to access into an infrastructure problem, so that getting your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, stuff like that, is almost as simple as deploying a VM in AWS or GCP. >>So that's kind of our mission. And just to give a really brief example of the kind of customer we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation for the Internet of Things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container, a container that you can track from anywhere in the world as it's going across the ocean.
So they're really little, and they've been able to stay a small startup that's focused on their product, which is that super crazy complicated, cool radio, while we handle the whole space segment for them, which, you know, before Loft was really impossible. So that's our mission: providing space infrastructure as a service. We are kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've gotta handle. >>So amazing, Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you kind of land on this business? >>Yeah, so just a little bit about me: some people, you know, don't necessarily know what they wanna do early in their life. For me, I was five years old and I knew I wanted to be in the space industry. So I started in the Air Force, but I've stayed in the space industry my whole career, and this is actually the fifth space startup that I've been a part of. I kind of started out in satellites, spent some time working in the launch industry on rockets, and now I'm here, back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. >>Super interesting. Okay, Angelo, let's talk about the Rubin Observatory: Vera C. Rubin, famous woman scientist, you know, galaxy guru. Now, you guys at the observatory are up way up high; you're gonna get a good look at the Southern sky. Now, I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close. You've gotta be super excited. Give us the update on the observatory and your role. >>All right.
So yeah, Rubin is a state-of-the-art observatory that is under construction on a remote mountain in Chile. With Rubin, we will conduct the Legacy Survey of Space and Time. We are going to observe the sky with an eight-meter optical telescope and take a thousand pictures every night with a 3.2-gigapixel camera, and we are going to do that for 10 years, which is the duration of the survey. >>Yeah, amazing project. Now, you aren't a Doctor of Philosophy, but you've probably spent some time thinking about what's out there, and you went on to earn a PhD in astronomy and astrophysics. So this is something you've been working on for the better part of your career, isn't it? >>Yeah, that's right. About 15 years. I studied physics in college, then I got a PhD in astronomy, and I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >>Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you could start. >>Yeah, absolutely. So the first company where I extensively used InfluxDB was a launch startup called Astra. We were in the process of designing our first-generation rocket there and testing the engines, pumps, everything that goes into a rocket. When I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. And at first, that's the way a lot of engineers and scientists are used to working.
At first, people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy, so our software engineering team was able to get it deployed and up and running very quickly, and then quickly backport all of the data we had collected thus far into InfluxDB. And what followed was amazing to see. >>The super cool moment with InfluxDB was when we hooked it up to Grafana, the visualization platform we used with InfluxDB because it works really well with it. There was this aha moment for our engineers, who were used to a post-process method of dealing with their data: they could almost instantly discover data they hadn't been able to see before, take the manual processes they would run after a test and throw those all into InfluxDB, and have live data as tests were running. I saw them implementing crazy rocket-equation-type stuff in InfluxDB, and it was totally game changing for how we tested. >>So Angelo, as I was explaining in my open, you could add a column in a traditional RDBMS and do time series, but with the volume of data you're talking about, and the example Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >>Yeah, correct. So I work with the data management team, and my first project was to record metrics that measure the performance of our software, the software we use to process the data. I started implementing that in a relational database, but then I realized that I was in fact dealing with time series data and should really use a solution built for that. So I started looking at time series databases, and I found InfluxDB. That was back in 2018.
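Time series points like the software performance metrics Angelo describes are written to InfluxDB as line protocol: a measurement name, a set of tags, a set of fields, and a timestamp. Here is a minimal pure-Python sketch of formatting such a point; the measurement, tag, and field names are hypothetical, not Rubin's actual schema, and real line protocol also escapes special characters and type-annotates fields:

```python
# Format one metric point as simplified InfluxDB line protocol:
#   measurement,tag=value field=value timestamp
# (Hypothetical schema; escaping and field type suffixes are omitted.)

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build a line protocol string from a measurement, tags, fields, and a nanosecond timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "task_runtime",                                  # hypothetical measurement
    {"task": "process_image", "site": "chile"},      # indexed tags
    {"duration_s": 42.5},                            # the measured value
    1_650_000_000_000_000_000,
)
print(point)
# → task_runtime,site=chile,task=process_image duration_s=42.5 1650000000000000000
```

In practice you would hand points like this to one of the InfluxDB client libraries rather than formatting them by hand; the sketch is only meant to show why metrics such as task durations map so naturally onto a time series model.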
Another use of InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. That itself is a time series, and every point in that time series we call a visit. We want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1,000 points every night. It's actually not too much data compared to other problems; it's really just a different time scale. >>The telescope at the Rubin Observatory is, pun intended I guess, the star of the show. And I believe I read that it's going to be the first of the next-gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times Hubble's widest camera view, which is amazing, right? That's like 40 moons in an image, and it's amazingly fast as well. What else can you tell us about the telescope? >>This telescope has to move really fast, and it also has to carry the primary mirror, which is an eight-meter piece of glass, very heavy, and a camera about the size of a small car. This whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff, and one thing that's amazing about its design is that this 300-ton structure sits on a tiny film of oil with the thickness of a human hair, which makes an almost zero-friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide-field telescope, so each image has a diameter of about seven full moons, and with that we can map the entire sky in only three days.
And of course, for operations, everything is controlled by software and it is automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope, and the camera is recording 15 terabytes of data every night. >>Hmm. And Angelo, all this data lands in InfluxDB, correct? What are you doing with all that data? >>Actually, not all of it. We are using InfluxDB to record engineering data and metadata about the observations, like telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images, but it is still challenging because you have some high-frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >>Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about these dishwasher-size satellites. You're kind of using a multi-tenant model; I think it's genius. Tell us about the satellites themselves. >>Yeah, absolutely. So we have some satellites already in space that, as you said, are about dishwasher or mini-fridge size, and we're working on a bunch more in a variety of sizes, from shoebox to a few times larger than what we have today. And we do aim for effectively a multi-tenant model, where we buy a bus off the shelf. The bus is what you can think of as the core piece of the satellite, almost like a motherboard: it provides the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. Then we build in-house what we call our payload hub, which has all the customer payloads attached and our own kind of edge-processing capabilities built into it.
>>So we integrate that, we launch it, and because those satellites are in low Earth orbit, they're orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. One of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time, so we're managing these things through very brief windows of time where we get to talk to them through our ground sites, either in Antarctica or in the north pole region. >>Talk more about how you use InfluxDB to make sense of this data through all this tech that you're launching into space. >>When I joined the company, we started off storing all of that, as Angelo did, in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple of days, to the point where we weren't able to store all of the data we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. That's things like power level, voltage, currents, counts, whatever metadata we need to monitor about the spacecraft; we now store that in InfluxDB. And now we can easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount. We can also seamlessly query large chunks of data. If I need to see, for example, as an operator, how my battery state of charge is evolving over the course of the year, I can have a plot in InfluxDB that loads a year's worth of data in a fraction of a second, because it can intelligently group the data by a sliding time interval.
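That fast year-long plot works because the database does windowed aggregation: raw samples are grouped into fixed time buckets and one aggregate (a mean, say) is returned per bucket, so the plot never has to load every raw point. A pure-Python sketch of the idea, which is roughly what Flux's `aggregateWindow()` does server-side:

```python
# Downsample (timestamp, value) samples into fixed windows and average each
# window -- the core idea behind plotting a year of telemetry quickly.

from collections import defaultdict

def aggregate_window(samples, every):
    """Group samples into `every`-second windows; return {window_start: mean}."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % every].append(value)   # floor timestamp to its window
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# One hypothetical battery-voltage reading per second for a minute,
# downsampled to 15-second windows:
telemetry = [(t, 7.4 + 0.001 * t) for t in range(60)]
windows = aggregate_window(telemetry, every=15)
print(len(windows))   # → 4
```

Downsampling a year of one-second samples into one-day windows reduces roughly 31 million points to 365, which is why such a plot can load in a fraction of a second.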
So it's been extremely powerful for us to access the data, and as time has gone on, we've gradually migrated more and more of our operating data into InfluxDB. >>We throw this term around a lot, you know, data driven. A lot of companies say, oh yes, we're data driven, but you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? >>Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of slow research, digging into the data, to almost instantaneously seeing the data and making decisions based on it immediately, rather than having to wait for some processing. And that's something I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft, and building that pipeline is challenging from a software engineering standpoint. >>Our primary language is Python, which isn't necessarily that fast. So, in the goal of being data driven, we've started publishing metrics on how individual pieces of our data processing pipeline are performing into InfluxDB as well. And we do that in production as well as in dev, so we have kind of a production monitoring flow.
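Instrumentation like that can be as lightweight as timing each pipeline stage and emitting the duration as a metric point. A hedged sketch of the pattern; the stage and measurement names are hypothetical, and a plain list stands in for the real InfluxDB write call:

```python
# Time each pipeline stage with a decorator and record the duration as a
# metric point (hypothetical names; `metrics` stands in for an InfluxDB write).

import time
from functools import wraps

metrics = []  # stand-in for an InfluxDB write API

def timed_stage(stage_name):
    """Decorator that records how long a pipeline stage takes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            metrics.append({
                "measurement": "pipeline_stage",
                "tags": {"stage": stage_name, "env": "dev"},
                "fields": {"duration_s": time.perf_counter() - start},
            })
            return result
        return wrapper
    return decorator

@timed_stage("decode_frames")
def decode_frames(raw):
    return [b ^ 0xFF for b in raw]   # placeholder for real decoding work

decode_frames(bytes(range(10)))
print(metrics[0]["tags"]["stage"])   # → decode_frames
```

Running the same decorator in dev and in production, tagged accordingly, is one way to get this kind of per-stage visibility without touching the stages' own logic.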
And what that has done is allow us to make intelligent decisions on our software development roadmap, on where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. Before we started doing this, we sometimes found ourselves chasing rabbits that weren't necessarily the real root cause of the issues we were seeing. But now that we're being a bit more data driven, we are much more effective in where we spend our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >>Yeah. So you reduced those dead ends. Maybe Angelo, you could talk about what data driven means to you and your teams? >>I would say that having real-time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images we collect with the telescope have good quality and that they are within the specifications to meet our science goals. If they are not, we want to know that as soon as possible and then start fixing problems. >>Caleb, what are your sort of event intervals like? >>I would say that, as of today on the spacecraft, the level of timing we deal with probably tops out at about 20 Hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, the ability to have high-precision data, is extremely important for these kinds of scientific applications. And I'll give an example from when I worked on the rocket at Astra. There, our baseline data rate for ingesting data during a test was 500 Hertz, so 500 samples per second.
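At sample rates like that, points are almost never written one at a time; clients batch them in memory and flush when a size (or time) threshold is reached. A minimal sketch of the size-based half of that pattern; the sink is a plain list standing in for a real write API, and the numbers are illustrative:

```python
# Buffer points and flush them to a sink in fixed-size batches, the way
# high-rate telemetry is typically written to a time series database.

class BatchWriter:
    def __init__(self, flush_size, sink):
        self.flush_size = flush_size
        self.sink = sink              # callable that receives a list of points
        self.buffer = []

    def write(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(list(self.buffer))
            self.buffer.clear()

batches = []
writer = BatchWriter(flush_size=500, sink=batches.append)
for i in range(1200):                 # about 2.4 s of data at 500 Hz
    writer.write(("chamber_pressure", i))
writer.flush()                        # drain the partial final batch
print([len(b) for b in batches])      # → [500, 500, 200]
```

Real clients, the InfluxDB client libraries among them, add a time-based flush as well, so that a slow stream still reaches the database promptly.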
And in some cases we would actually need to ingest much higher-rate data, even up to 1.5 kilohertz. So that's extremely high-precision data, where timing really matters a lot, and one of the really powerful things about InfluxDB is the fact that it can handle this. >>That's one of the reasons we chose it: there are times when you're looking at the results of a firing and zooming in. I talked earlier about how in my current job we often zoom out to look at a year's worth of data; here you're zooming in to where your screen is occupied by a tiny fraction of a second. And you need to see, as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events coming out of our controllers. That can be something like, hey, I opened this valve at exactly this time. We want to have that at microsecond or even nanosecond precision, so that when we see a spike in chamber pressure at an exact moment, we know whether it happened before or after that valve opened. That kind of visibility is critical in these kinds of scientific applications, and it's absolutely game changing to be able to see it in near real time, with a really easy way for engineers to visualize the data themselves without having to wait for software engineers to build it for them. >>Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? >>Well, from my perspective, that's absolutely one of the best things about InfluxDB, and what I've seen be game changing: generally, I'd say anyone can learn to use InfluxDB.
And honestly, most of our users might not even know they're using InfluxDB, because the interface we expose to them is Grafana, an open-source graphing tool that is very similar to InfluxDB's own Chronograf. It provides a very intuitive UI for building your queries: you choose a measurement, and it shows a dropdown of available measurements; then you choose the particular field you want to look at, and again, that's a dropdown, so it's really easy for our users to discover. And there are point-and-click options for doing math and aggregations; you can even do predictions, all within the Grafana user interface, which is really just a wrapper around the APIs and functionality that InfluxDB provides. >>Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you? Is it self-serve? >>Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. >>Yeah, it's all about using the right tool for the job. I think, for us, when I joined the company, we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly; even querying short periods of data was taking on the order of seconds, which is just not possible for operations. >>Guys, this has been really informative. It's pretty exciting to see how the edge is mountaintops, low Earth orbit; space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here: what comes next for you guys?
And is there something you're really excited about that you're working on? Caleb, maybe you could go first, and Angelo, you can bring us home. >>Basically, what's next for Loft Orbital is more satellites and a greater push towards infrastructure; our mission is to make space simple for our customers and for everyone, and we're scaling the company like crazy now to make that happen. It's an extremely exciting time to be in this company and in this industry as a whole, because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of. And with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be. We're launching more satellites, we're scaling up for some constellations, and our ground system has to be improved to match, so there are a lot of improvements we're working on to really scale up our control software, to make it best in class and capable of handling such a large workload. >>You guys hiring? >><laugh> We are absolutely hiring. We have positions all over the company: we need software engineers, and we need people who do more aerospace-specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. >>All right, Angelo, bring us home. >>Yeah. So what's next for us is really getting this telescope working and collecting data. When that happens, it's going to be a deluge of data coming out of this camera, and handling all that data is going to be really challenging. I want to be here for that.
<laugh> I'm looking forward to it. For next year we have an important milestone, which is our commissioning camera, a simplified version of the full camera, going on sky. So yeah, most of the system has to be working by then. >>Nice. All right, guys, with that, we're going to end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value creation at the edge, in the cloud, and of course beyond, in space. So really transformational work that you guys are doing. Congratulations, and we really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high-tech enterprise coverage. >>Telegraf is a popular open-source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open-source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf, we'd love to hear what you're building. >>Thanks for watching Moving the World with InfluxDB, made possible by InfluxData. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and/or fast data volumes, and you want to scale cost-effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out.
You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out, and poke around at InfluxData; they are the folks behind InfluxDB and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE. We'll see you soon.

Published Date : May 12 2022



Moving The World With InfluxDB


 

(upbeat music) >> Okay, we're now going to go into the customer panel. And we'd like to welcome Angelo Fausti, who's software engineer at the Vera C Rubin Observatory, and Caleb Maclachlan, who's senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't want to miss folks, this interview. Caleb, let's start with you. You work for an extremely cool company. You're launching satellites into space. Cause doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem? >> Yeah, absolutely. And thanks for having me here, by the way. So Loft Orbital is a company that's a series B startup now. And our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a lot of very specialized engineering. And what we're trying to do is, change that from a super specialized problem that has an extremely high barrier of access to a infrastructure problem. So that it's almost as simple as deploying a VM in AWS or GCP, as getting your programs, your mission deployed on orbit, with access to different sensors, cameras, radios, stuff like that. So that's kind of our mission. And just to give a really brief example of the kind of customer that we can serve. There's a really cool company called Totum labs, who is working on building an IoT constellation, for Internet of Things. Basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container. A container that you track from anywhere on the world as it's going across the ocean. So it's really little. 
And they've been able to stay a small startup that's focused on their product, which is that super crazy, complicated, cool radio, while we handle the whole space segment for them, which, before Loft, was really impossible. So that's our mission: providing space infrastructure as a service. We are kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously, generating a ton of data in space that we've got to handle. >> Yeah, so amazing, Caleb, what you guys do. I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so I guess just a little bit about me. For some people, they don't necessarily know what they want to do, early in their life. For me, I was five years old and I knew, I want to be in the space industry. So I started in the Air Force, but have stayed in the space industry my whole career and been a part of, this is the fifth space startup that I've been a part of, actually. So I've kind of started out in satellites, did spend some time working in the launch industry on rockets. Now I'm here back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. So, always been passionate about space and basically writing software for operating in space, for extending how we write software into orbit. >> Super interesting. Okay, Angelo. Let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys, the observatory, are up, way up high, you're going to get a good look at the southern sky. I know COVID slowed you guys down a bit. But no doubt you continue to code away on the software. I know you're getting close. You got to be super excited. Give us the update on the observatory and your role. >> All right. So yeah, Rubin is a state of the art observatory that is in construction on a remote mountain in Chile.
And with Rubin we'll conduct the Legacy Survey of Space and Time. We are going to observe the sky with an eight meter optical telescope and take 1000 pictures every night with a 3.2 gigapixel camera. And we're going to do that for 10 years, which is the duration of the survey. The goal is to produce an unprecedented data set, which is going to be about 0.5 exabytes of image data. And from these images we'll detect and measure the properties of billions of astronomical objects. We are also building a science platform that's hosted on Google Cloud, so that the scientists and the public can explore this data to make discoveries. >> Yeah, amazing project. Now, you weren't always a Doctor of Philosophy. So you probably spent some time thinking about what's out there. And then you went on to earn a PhD in astronomy and astrophysics. So this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right. About 15 years. I studied physics in college, then I got a PhD in astronomy. And I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common, of course, is software. And you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB, get into it? How do you use the platform? Maybe Caleb, you can start. >> Yeah, absolutely. So the first company that I extensively used InfluxDB in was a launch startup called Astra. And we were in the process of designing our first generation rocket there and testing the engines, pumps. Everything that goes into a rocket. And when I joined the company, our data story was not very mature. We were collecting a bunch of data in LabVIEW. And engineers were taking that over to MATLAB to process it. And at first, that's the way that a lot of engineers and scientists are used to working.
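(For context: InfluxDB, the system Caleb's team migrated to, ingests each sample as a timestamped point in a plain-text line protocol. A minimal pure-Python formatter for this kind of test telemetry might look like the sketch below. The measurement, tag, and field names are purely illustrative, not any company's real schema, and real line protocol additionally requires character escaping and an `i` suffix on integer fields.)

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one sample as a simplified InfluxDB line-protocol record:
    measurement,tag=value field=value timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# One hypothetical engine-test sample, timestamped at nanosecond precision.
line = to_line_protocol(
    "engine_test",
    {"stand": "A1", "sensor": "chamber_pressure"},
    {"psi": 312.5},
    1650000000000000000,
)
print(line)
# engine_test,sensor=chamber_pressure,stand=A1 psi=312.5 1650000000000000000
```

One record per sample, tags for indexed metadata and fields for the measured values, is the shape that makes the high-rate ingest and fast range queries discussed below possible.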
And at first, people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly, and then quickly also backport all of the data that we'd collected thus far into Influx. And what was amazing to see, and is kind of the super cool moment with Influx, is when we hooked that up to Grafana. Grafana is the visualization platform we use with Influx, because it works really well with it. There was this aha moment of our engineers, who were used to this post process kind of method for dealing with their data, where they could just almost instantly, easily discover data that they hadn't been able to see before. And take the manual processes that they would run after a test and just throw those all in Influx and have live data as tests were coming in. And I saw them implementing crazy rocket equation type stuff in Influx and it just was totally game changing for how we tested. And things that previously would be like, run a test, then wait an hour for the engineers to crunch the data, and then we run another test with some changed parameters or a changed startup sequence or something like that, became, by the time the test is over, the engineers know what the next step is, because they have this instant, game changing access to data. So since that experience, basically everywhere I've gone, every company since then, I've been promoting InfluxDB and using it and spinning it up and quickly showing people how simple and easy it is. >> Yeah, thank you. So Angelo, I was explaining in my open that, you know, you could add a column in a traditional RDBMS and do time series. But with the volume of data that you're talking about in the example that Caleb just gave, you have to have a purpose built time series database. Where did you first learn about InfluxDB? >> Yeah, correct.
So I worked with the data management team and my first project was to record metrics that measure the performance of our software. The software that we use to process the data. So I started implementing that in our relational database. But then I realized that, in fact, I was dealing with time series data. And I should really use a solution built for that. And then I started looking at time series databases and I found InfluxDB. That was back in 2018. Then I got involved in another project, to record telemetry data from the telescope itself. It's very challenging because you have so many subsystems and sensors producing data. And with that data, the goal is to look at the telescope hardware in real time so we can make decisions and make sure that everything's doing the right thing. And another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time and pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series. And every point in the time series, we call that a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1000 points every night. It's actually not too much data compared to the other problems. It's really just a different time scale. So yeah, we have plans on continuing using InfluxDB and finding new applications in the project. >> Yeah, and the speed with which you can actually get high quality images. Angelo, my understanding is, you use InfluxDB, as you said, you're monitoring the telescope hardware and the software. And, I believe, some of the scientific data as well. The telescope at the Rubin Observatory is, like, no pun intended, I guess, the star of the show. And I believe I read that it's going to be the first of the next gen telescopes to come online.
It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. That's like 40 moons in an image, and amazingly fast as well. What else can you tell us about the telescope? >> Yeah, so it's really a challenging project, from the point of view of engineering. This telescope, it has to move really fast. And it also has to carry the primary mirror, which is an eight meter piece of glass, it's very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300 ton structure, sits on a tiny film of oil, which has the diameter of a human hair, and that brings an almost zero friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So each image has, in diameter, the size of about seven full moons. And with that we can map the entire sky in only three days. And of course, during operations, everything's controlled by software, and it's automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope and the camera, which will record the 15 terabytes of data every night. >> And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Yeah, actually not. So we're using InfluxDB to record engineering data and metadata about the observations, like telemetry, events and the commands from the telescope. That's a much smaller data set compared to the images. But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Hm.
So at the mountain, we keep the data for 30 days. So the observers, they use the InfluxDB instance running there to analyze the data. But we also replicate the data to another instance running at the US data facility, where we have more computational resources and more people can look at the data without interfering with the observations. Yeah, I have to say that InfluxDB has been really instrumental for us, especially at this phase of the project where we are testing and integrating the different pieces of hardware. And it's not just the database, right? It's the whole platform. So I like to give this example: when we are doing this kind of task, it's hard to know in advance which dashboards and visualizations you're going to need, right? So what you really need is a data exploration tool. And with tools like Chronograf, for example, having the ability to query and create dashboards on the fly was really a game changer for us. So astronomers, they typically are not software engineers, but they are the ones that know better than anyone what needs to be monitored. And so they use Chronograf and they can create the dashboards and the visualizations that they need. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about, you got these dishwasher size satellites, are kind of using a multi tenant model. I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So we have in space some satellites already that, as you said, are like dishwasher, mini fridge kind of size. And we're working on a bunch more that are a variety of sizes, from shoe box to, I guess, a few times larger than what we have today. And we do shoot to have effectively something like a multi tenant model, where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something.
Where it's providing the power, it has the solar panels, it has some radios attached to it, it handles the attitude control, basically steers the spacecraft in orbit. And then we build, also in house, what we call our payload hub, which has any customer payloads attached, and our own kind of edge processing sort of capabilities built into it. And so we integrate that, we launch it, and those things, because they're in low Earth orbit, they're orbiting the Earth every 90 minutes. That's seven kilometers per second, which is several times faster than a speeding bullet. So we've got, we have one of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time. So we're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North Pole region. So we'll see them for 10 minutes, and then we won't see them for the next 90 minutes as they zip around the Earth collecting data. So one of the challenges that exists for a company like ours is, you have to be able to make real time decisions operationally, in those short windows, that can sometimes be critical to the health and safety of the spacecraft. And it could be possible that we put ourselves into a low power state in the previous orbit, or something potentially dangerous to the satellite can occur. And so as an operator, you need to very quickly process that data coming in. And not just the live data, but also the massive amounts of data that were collected in what we call the back orbit, which is the time that we couldn't see the spacecraft. >> We got it. So talk more about how you use InfluxDB to make sense of this data from all that tech that you're launching into space. >> Yeah, so we basically, previously we started off, when I joined the company, storing all of that, as Angelo did, in a regular relational database.
And we found that it was so slow, and the size of our data would balloon over the course of a couple of days to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So things like power levels, voltages, current counts, whatever metadata we need to monitor about the spacecraft, we now store that in InfluxDB. And now we can actually easily store the entire volume of data for the mission life so far, without having to worry about the size bloating to an unmanageable amount. And we can also seamlessly query large chunks of data. Like if I need to see, for example, as an operator, how my battery state of charge is evolving over the course of the year, I can have a plot in Influx that loads that in a fraction of a second for a year's worth of data, because I can intelligently group the data by time interval. So it's been extremely powerful for us to access the data. And as time has gone on, we've gradually migrated more and more of our operating data into Influx. So not only do we store the basic telemetry about the bus and our payload hub, but we're also storing data for our customers, that our customers are generating on board. One example of a customer that's doing something pretty cool: they have a computer on our satellite, which they can reprogram themselves to do some AI enabled edge compute type capability in space. And so they're sending us some metrics about the status of their workloads, in addition to the basics, like the temperature of their payload, their computer or whatever else. And we're delivering that data to them through Influx in a Grafana dashboard that they can plot, where they can see not only has this pipeline succeeded or failed, but also where was the spacecraft when this occurred?
What was the voltage being supplied to their payload? Whatever they need to see, it's all right there for them. Because we're aggregating all that data in InfluxDB. >> That's awesome. You're measuring everything. Let's talk a little bit about, we throw this term around a lot, data driven. A lot of companies say, oh, yes, we're data driven. But you guys really are. I mean, you got data at the core. Caleb, what does that mean to you? >> Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before, at Astra, where our engineers' feedback loop went from a lot of kind of slow researching, digging into the data, to almost instantaneous: seeing the data, making decisions based on it immediately, rather than having to wait for some processing. And that's something that I've also seen echoed in my current role. But to give another practical example, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all that data almost instantaneously and provide it to the operator in near real time. About a second worth of latency is all that's acceptable for us to react to, to see what is coming down from the spacecraft, and building that pipeline is challenging from a software engineering standpoint. Our primary language is Python, which isn't necessarily that fast. So what we've done, in the goal of being data driven, is publish metrics on how individual pieces of our data processing pipeline are performing into Influx as well. And we do that in production as well as in dev. So we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are.
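(A rough sketch of that kind of pipeline instrumentation: timing each processing stage and emitting the duration as a metric point. The stage name and the in-memory sink below are hypothetical, invented for illustration; in a production setup the sink would be an InfluxDB write, not a Python list.)

```python
import time
from functools import wraps

METRICS = []  # stand-in for a real metrics sink such as an InfluxDB write API

def timed_stage(stage_name):
    """Decorator that records how long a pipeline stage takes as a metric point."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            METRICS.append({
                "measurement": "pipeline_timing",
                "tags": {"stage": stage_name},
                "fields": {"duration_s": time.perf_counter() - start},
                "time_ns": time.time_ns(),
            })
            return result
        return wrapper
    return decorator

@timed_stage("decode_frames")
def decode_frames(raw):
    # Placeholder work standing in for real frame decoding.
    return [b ^ 0xFF for b in raw]

decode_frames(bytes(range(10)))
print(METRICS[0]["tags"])  # {'stage': 'decode_frames'}
```

Because every stage emits the same measurement with a different `stage` tag, a single dashboard query can rank stages by latency, which is exactly the "where should we optimize next" visibility described above.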
At times we've found ourselves, before we started doing this, kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. But now that we're being a bit more data driven, we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scaled from supporting a couple of satellites to supporting many, many satellites at once. >> So you reduce those dead ends. Maybe Angelo, you could talk about what sort of data driven means to you and your team? >> Yeah, I would say that having real time visibility to the telemetry data and metrics is crucial for us. We need to make sure that the images that we collect with the telescope have good quality and that they are within the specifications to meet our science goals. And so if they are not, we want to know that as soon as possible, and then start fixing problems. >> Yeah, so I mean, you think about these big science use cases, Angelo. They are extremely high precision, you have to have a lot of granularity, very tight tolerances. How does that play into your time series data strategy? >> Yeah, so one of the subsystems that produces the high volume and high rates is the structure that supports the telescope's primary mirror. So on that structure, we have hundreds of actuators that compensate the shape of the mirror for deformations. That's part of our active optics system. So that's really real time. And we have to record these high data rates, and we have requirements to handle data that are a few 100 hertz. So we can easily configure our database with milliseconds precision, that's for telemetry data. But for events, sometimes we have events that are very close to each other and then we need to configure the database with higher precision. >> Um hm. >> For example, microseconds.
>> So I would say that, as of today on the spacecraft, the event, the level of timing that we deal with probably tops out at about 20 hertz, 20 measurements per second on things like our gyroscopes. But I think the core point here of the ability to have high precision data is extremely important for these kinds of scientific applications. And I'll give you an example, from when I worked on the rockets at Astra. There, our baseline data rate that we would ingest data during a test is 500 hertz, so 500 samples per second. And in some cases, we would actually need to ingest much higher rate data. Even up to like 1.5 kilohertz. So extremely, extremely high precision data there, where timing really matters a lot. And, I can, one of the really powerful things about Influx is the fact that it can handle this, that's one of the reasons we chose it. Because there's times when we're looking at the results of firing, where you're zooming in. I've talked earlier about how on my current job, we often zoom out to look at a year's worth of data. You're zooming in, to where your screen is preoccupied by a tiny fraction of a second. And you need to see, same thing, as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. So that can be something like, hey, I opened this valve at exactly this time. And that goes, we want to have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment, was that before or after this valve open? That kind of visibility is critical in these kinds of scientific applications and absolutely game changing, to be able to see that in near real time. And with a really easy way for engineers to be able to visualize this data themselves without having to wait for us software engineers to go build it for them. >> Can the scientists do self serve? 
Or do you have to design and build all the analytics and queries for scientists? >> I think that's absolutely from my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing is that, generally, I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx. Because the interface that we expose to them is Grafana, which is generic graphing, open source graphing library that is very similar to Influx zone chronograph. >> Sure. >> And what it does is, it provides this, almost, it's a very intuitive UI for building your query. So you choose a measurement, and it shows a drop down of available measurements, and then you choose the particular field you want to look at. And again, that's a drop down. So it's really easy for our users to discover it. And there's kind of point and click options for doing math, aggregations. You can even do like, perfect kind of predictions all within Grafana. The Grafana user interface, which is really just a wrapper around the API's and functionality that Influx provides. So yes, absolutely, that's been the most powerful thing about it, is that it gets us out of the way, us software engineers, who may not know quite as much as the scientists and engineers that are closer to the interesting math. And they build these crazy dashboards that I'm just like, wow, I had no idea you could do that. I had no idea that, that is something that you would want to see. And absolutely, that's the most empowering piece. >> Yeah, putting data in the hands of those who have the context, the domain experts is key. Angelo is it the same situation for you? Is it self serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. And I have an example just from last week. 
We had an engineer at the observatory that was building a dashboard to monitor the cooling system of the entire building. And he was familiar with InfluxQL, which was the primary query language in version one of InfluxDB. And that was really a challenge, because he had all the data spread across multiple InfluxDB measurements. And he was doing one query for each measurement and was not able to produce what he needed. But that's the perfect use case for Flux, which is the new data scripting language that InfluxData developed and introduced as the main language in version two. And so with Flux, he was able to combine data from multiple measurements and summarize this data in a nice table. So yeah, having a more flexible and powerful language also allows you to make better visualizations. >> So Angelo, where would you be without a time series database, that technology generally, maybe specifically InfluxDB, as one of the leading platforms? Would you be able to do this? >> Yeah, it's hard to imagine doing what we are doing without InfluxDB. And I don't know, perhaps it would be just a matter of time to rediscover InfluxDB. >> Yeah. How about you, Caleb? >> Yeah, I mean, it's all about using the right tool for the job. I think for us, when I joined the company, we weren't using InfluxDB and we were dealing with serious issues of the database growing to an incredible size extremely quickly. And being unable to, like, even querying short periods of data was taking on the order of seconds, which is just not possible for operations. So a time series database is, if you're dealing with large volumes of time series data, a time series database is the right tool for the job, and Influx is a great one for it. So, yeah, it's absolutely required for this kind of data, there is not really any other option. >> Guys, this has been really informative. It's pretty exciting to see how the edge is mountain tops, low Earth orbits. Space is the ultimate edge.
Isn't it? I wonder if you could answer two questions to wrap here. What comes next for you guys? And is there something that you're really excited about, that you're working on? Caleb, maybe you could go first and then Angelo, you could bring us home. >> Yeah, absolutely. So basically, what's next for Loft Orbital is more, more satellites, a greater push towards infrastructure and really making, our mission is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, making that happen. It's extremely exciting and an extremely exciting time to be in this company and to be in this industry as a whole. Because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX now rapidly lowering the cost of launch, it's just a really exciting place to be in. And we're launching more satellites. We're scaling up for some constellations, and our ground system has to be improved to match. So there is a lot of improvement that we are working on to really scale up our control systems, to be best in class and make them capable of handling such large workloads. So, yeah, what's next for us is just really 10x-ing what we are doing. And that's extremely exciting. >> And anything else you are excited about? Maybe something personal? Maybe, you know, a tidbit you want to share. Are you guys hiring? >> We're absolutely hiring. So, we've got positions all over the company. So we need software engineers. We need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website, if this is at all interesting. Personal wise, I don't have any interesting personal things that are data related. But my current hobby is sea kayaking, so I'm working on becoming a sea kayaking instructor. So if anyone likes to go sea kayaking out in the San Francisco Bay area, hopefully I'll see you out there. >> Love it.
All right, Angelo, bring us home. >> Yeah. So what's next for us is, we're getting this telescope working and collecting data, and when that happens, it's going to be just a deluge of data coming out of this camera. And handling all that data is going to be really challenging. I wonder, I might not be here for that, but I'm looking forward to it. For next year we have an important milestone, which is our commissioning camera, which is a simplified version of the full camera, is going to be on sky, and so most of the system has to be working by then. >> Any cool hobbies that you are working on, or any side projects? >> Yeah, actually, during the pandemic I started gardening. And I live here in Tucson, Arizona. It gets really challenging during the summer because of the lack of water, right. And so, we have an automatic irrigation system at the farm and I'm trying to develop a small system to monitor the irrigation and make sure that our plants have enough water to survive. >> Nice. All right guys, with that we're going to end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value at the edge, in the cloud and of course beyond, in space. Really transformational work that you guys are doing. So congratulations, and I really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. Now in a moment, I'll be back to wrap up. This is Dave Vellante. And you are watching The Cube, the leader in high tech enterprise coverage. (upbeat music)

Published Date : Apr 21 2022



Manish Sood, Reltio | AWS re:Invent 2021


 

(upbeat music) >> We're back at AWS re:Invent 2021. You're watching The Cube, I'm Dave Vellante with my co-host David Nicholson. I'm Dave, he's David. >> We're trying something new here at the cube. A little stand up cube. You've heard of the pop-up cube, maybe. We're going to stand up. I work at a standing desk at my office, so let's try it. Four days, two sets, a hundred plus guests. Why not? So Manish Sood is here. He's the founder and CTO of Reltio, and a Cube alum. >> Dave: Manish, thank you for standing, and good to see you again. >> Dave, it's great to see you again, and David, thank you for having me here. >> So, tell us a little bit about yourself, your background. I'm always interested to ask founders why you started your company, but tell us the background. >> Yeah, so a little bit of my background and the company's history. Most of my background has been in data management and creating products for data management. I was at a company called Informatica, came through an acquisition into Informatica, back in 2010. And started Reltio in 2011. The reason why we started Reltio was that, if you look at the enterprise space and how things have been evolving, there has been an explosion of applications. There's almost an application for every little business process that you can possibly imagine. Enterprise customers who used to struggle with 12 or 24 different systems are now coming to us and saying they have 300 or 500 different applications that they use to run their business. And that's at the lower end of the spectrum. Even a business like Reltio today runs on a hundred plus SaaS applications, end to end. And that is creating one of the biggest opportunities, as well as one of the biggest friction points, in the enterprise. Because in order to create better, efficient business outcomes, you have silos of data and you don't know where the source of truth is. And that is something that we saw early on in 2011.
At the same time, we also saw that digital transformation or cloud transformation type of requirements were going to drive a larger need for this kind of capability, where Reltio type of products could act as that single source of truth to unify all of the multi-source siloed information. So, that's what got us started down this journey. >> So, okay. So, when people hear single source of truth, they think, oh, database, right? But that's not what you guys do, right? I mean, can I call it master data management? But it's really modern master data management. You're kind of recreating a new or creating a new category that- >> Manish: A little bit. >> solves a similar problem. Maybe you could explain that. >> Yeah. A little bit of background. So the term master data management came about in the 1920s. (Dave laughing) Can you believe that? It was when, during the pandemic, the U.S. government was trying to figure out how to know who was still alive versus, you know, not there anymore. And they created something called the death master. Now, a very ominous name for a concept of just bringing data together and figuring out what's going on in the economy, but that need, or problem, hasn't gone away. It has just become a harder problem to solve, because now we have so many different systems to deal with, and both internal as well as third-party data sources that companies have to work with. The need has been around, but the technical capabilities to really keep solving the problem and delivering the solution in a manner where it can keep pace with the evolving needs, that capability has been missing. And that's where the "aha" moment for us was that we really needed to build it out as a foundation that would continue to grow and scale with the magnitude of the problem that we were going to see in the future.
>> Okay, so this idea of single version of the truth, obviously critically important for reporting, financials, you can't, you can't tell an auditor one thing, you know, your, your customers are another thing, your consumers, it's got to be consistent. And especially in regulated industries. Is there a difference Manish, between sort of that type of data and the data maybe that's in the line of business that doesn't necessarily affect the rest of the business? Can they have their own version of the truth, which is just their version, their, their, their single version? It doesn't necessarily have to affect anything else. Do you, are you seeing that changing data landscape, where things are getting more distributed and ownership is becoming more distributed? >> So, the change in the paradigm that we are seeing is because of the proliferation of the data, there is a need to establish, what is the aggregated view of the information. Aggregated and unified, which means that, you know, if there is a record for Dave Vellante or David Vellante. It's the same person. Establishing that fact as the truth across any number of systems that you have, versus the multiple versions of the truth, where somebody comes in and says, for compliance reasons, I want the entire collection of data versus for marketing reasons, I only want one third the slice of this information. So that's where this concept of aggregate once, unify that information, but then make it ready and available for multiple consumers to partake from that. That's becoming the norm. >> Dave: Got it. 
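Sood's "aggregate once, unify, then serve multiple consumers" pattern can be sketched in a few lines of Python. This is only an illustration: the fuzzy name matching, the 0.8 threshold, and the record shapes are invented for the example and are not Reltio's actual matching logic.

```python
from difflib import SequenceMatcher

def normalize(name):
    # Lowercase and strip punctuation so "Dave Vellante" and
    # "David Vellante" are compared on the same footing.
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def same_entity(a, b, threshold=0.8):
    # Crude fuzzy match standing in for real matching/survivorship rules.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def unify(records):
    # Aggregate once: fold matching records into a single golden profile,
    # keeping every source record for lineage.
    profiles = []
    for rec in records:
        for prof in profiles:
            if same_entity(prof["name"], rec["name"]):
                prof["sources"].append(rec)
                break
        else:
            profiles.append({"name": rec["name"], "sources": [rec]})
    return profiles

records = [
    {"name": "Dave Vellante", "system": "CRM"},
    {"name": "David Vellante", "system": "Billing"},
    {"name": "Manish Sood", "system": "CRM"},
]
profiles = unify(records)

# One unified profile, consumed differently by different teams:
compliance_view = profiles[0]["sources"]        # the entire collection of data
marketing_view = {"name": profiles[0]["name"]}  # just the slice marketing needs
```

The point is the shape of the solution: match once into a golden record, then let compliance pull the full collection while marketing pulls only its slice.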
>> And you mentioned something, Dave, that analytics, reporting, BI, data science, those have been some of the traditional playgrounds for this kind of information to be unified, because if you're trying to roll up the revenue for, you know, the business that you do with Coke or Coca-Cola, you know, you don't know which name you used, then you have to go back to the analytics warehouse and aggregate all of that information and do the reporting. But the same problem is coming up in real time, digital experiences as well. The only difference is, that instead of having the luxury of a few hours, now you have to make the decision in a few milliseconds. >> So, when you talk about those silos of data and seeking to have a unification of those silos, how has that changed in the era of cloud? Is it that Reltio is integrating those disparate sources that now exist in cloud, or is it that you are leveraging cloud to address the problem that's been with us for a long time? And I have to say that Dave Vellante, take him off the the death master. He's definitely still with us. (Manish and Dave laugh) >> Dave: Another good day. >> I'm pretty sure too. But how, how, how has, how have things changed as you know, with, with the dawn of cloud? >> With the dawn of cloud, there are two things that have become available to us. One is using the power of the cloud compute to solve the problem, so that you can keep growing with the footprint of the problem itself and have a solution that scales along with it. But at the same time, you have systems of record, could be your mainframe systems, could be your SAP, ERP type of deployments that you have. Some of those functional applications, they're not going away anytime soon, they're there to stay. But at the same time, you also need the new digital experiences to be delivered on. 
The glue between those two worlds is the source of truth data that sits in the middle and the best place for it to sit is the cloud, because you have to open it up to the rest of the ecosystem that sits in the cloud, but you also have to maintain a connection to the on the ground type of systems. Putting it behind the firewall and trying to do that is next to impossible, but doing it in the cloud opens up all the doors that you need for your transformation to take place. >> You know Dave, there was a time when I was part of an industry where coding, not writing code, but coding data to basically say, look, this field here is the person's last name. This field is the address where the mortgage is being held. How much of that is still manual, as opposed to applying some form of AI to the problem? Let's say you have 200 different sources of information, where Dave Vellante's name shows up in a variety of contexts. Are we still having to go in manually and sift through to make those correlations? How much of that has been automated at this point? >> So, there are systems of capture where some of that information, because your loan mortgage application was entered by somebody into a system, will still be captured in those places, but we'll take in that information. That's the starting point, but if there are other sources, then we will apply AIML type of capabilities to bring on those new emerging sources. Because at the same time, think about this equation where, you started with five systems or, you know, a dozen systems. Now you're talking about 300 plus systems. You cannot keep doing this manually for every system possible. And this number is only going to grow as we move forward. So AIML definitely has a role to play and further automate this landscape. >> I had to, I saw an amazing stat the other day, the source was the Sand Hill Econometrics, you know, a Silicon valley company. 
And the stat was that 70% of the series, A, B and C companies, fail to return at least one X to their investors. So you've made it through that nut hole. Congratulations you just raised $120 million dollar round. That's got to be super exciting for you. >> David: No pressure by the way. >> Dave: Tell us about that. Well, I mean, you'd think the industry would have de-risked by now, right. But anyway, so, tell us about that raise. Where are you, where are you guys are at? Very exciting times for you. >> Yeah, really, really exciting time for us. We just raised $120 million dollars. The company was valued at $1.7 billion dollars. >> Dave: Awesome. Congratulations. >> And the round was, you know, all of our existing investors participated in it. We also had a new investor join in the process, as well. >> Dave: They wanted their pro-rata. (Dave and Manish laugh) >> Everybody, everybody wanted their pro-rata. >> Dave: That's great. >> But you know, one of the things that we have been very careful about in this whole process and journey, is something that you and I were talking about, the step function of scale. We're making sure that we are efficient stewards of capital and applying it in a manner where we are at every turn, looking at what's the next step function that we need to graduate to, because we want to make use of this capital to efficiently grow our business and be a Rule of 40 growth company. And that's something that you don't typically hear these days from a lot of the growth companies, but we are certainly focused on building long-term value and focusing on that Rule of 40 growth efficiency. >> Yeah, so Rule of 40 is growth plus EBITDA, or sometimes they use other metrics, but is that how you look at it? Growth plus EBITDA. >> Yes. Yeah. >> Great. >> And that's the formula that we are driving for. 
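The Rule of 40 arithmetic the two discuss is simple enough to spell out. A quick sketch; the company figures below are made up for the example:

```python
def rule_of_40(revenue_growth_pct, ebitda_margin_pct):
    # Rule of 40: annual revenue growth rate plus EBITDA margin
    # (both in percentage points) should total at least 40.
    return revenue_growth_pct + ebitda_margin_pct >= 40

# A company growing 50% a year can run a -10% EBITDA margin and still clear the bar:
print(rule_of_40(50, -10))   # True: 50 + (-10) = 40
# Slower growth with modest profitability falls short:
print(rule_of_40(20, 10))    # False: 20 + 10 = 30
```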
And most of our investments with this round of capital are going to be not only pushing forward with the go-to market strategy, because we have a lot of growth opportunity, we have been North America focused, now we can take this global. At the same time, looking at the verticals where we need to double down and invest more, given that we have been a horizontal platform that is core to our capabilities, that we have built with Reltio. But at the same time, making sure that we are investing in the key verticals that we are present in. >> Yeah. So, you were explaining to me that you, you started in the pharmaceutical industry, that's where you got go to market fit. And then you went to other industries. When you went to those other industries where they're similar patterns, or do you do almost have to start from ground zero again, to get that product market fit? >> No. So from the very beginning, the concept has been that this is a horizontal data problem. And at the heart of it, it's information about people, organizations, product, locations, and most of the businesses run on that type of information. That's the core part of the data that they build their business on. Life sciences was a perfect starting point for us, because it had examples of all of those data. When you start with commercial operations, which is sales and marketing, you have people, organization, product type of information. When you go into clinical trials, you have site investigators and patient type of information. When you go into R and D within that same space, you have drugs, compounds, substances, finished products, type of information, all coming from multiple sources. So it was a perfect place for us to prove out, all of the capabilities end to end, which we like to call multi-domain capabilities. And then we looked at what other verticals have similar patterns. And that's why we went after healthcare, financial services, insurance, retail, high tech. 
Those are some of the key verticals that we are in right now. >> That's awesome. Great vision. Last question, could you give us a sense of the futures, where you're going? Well, first of all, what are you doing with the money? Is it, you go to market, throwing gas on the fire? And what can we expect in the coming year and years? >> Go to market expansion is a key area of investment, but also doubling down on the customer experience that we deliver, how we invest in the product, what are some of the adjacent capabilities that we need to invest in? Because you know, data is a great starting point and data should not hold businesses back. Data should be the accelerant to the business. And that's our philosophy, that we are trying to bring to life. So making sure that we are making the data, readily available, accessible and usable for all of our customers is the key goal to aim for. And that's where all the investment is going. >> Well, Manish was a pleasure having you on at the AWS startup showcase, and then subsequently you become a unicorn. So congratulations on that. Really excited to watch the continued progress. Thanks for coming back in The Cube. >> Well, thank you so much, Dave and David, thanks for having me. >> David: Thanks for validating that Mr. Vellante is still with us. >> (laughs) He's going to be with us for a long time. >> I hope so, I hope so, I got, I got one more to put through college. Thank you for watching this edition of The Cube, at AWS reinvent. I'm Dave Vellante, for Dave Nicholson. We are The Cube, the leader in high-tech coverage, Be right back. (somber music)

Published Date : Dec 1 2021


Avi Shua, Orca Security | CUBE Conversation May 2021


 

(calm music)- Hello, and welcome to this CUBE conversation here in Palo Alto, California in theCUBE Studios, I'm John Furrier, host of theCUBE. We are here with a hot startup working on some really important security technology for the cloud, great company, Orca Security. Avi Shua, CEO and co-founder. Avi, thank you for coming on theCUBE and sharing your story. >> Thanks for having me. >> So one of the biggest problems that enterprises at large scale, people who are going to the cloud and are in the cloud and are evolving with cloud native, have realized is that the pace of change and the scale is a benefit to the organizations, but for the security teams, getting that security equation right is always challenging, and it's changing. You guys have a solution for that, and I really want to hear what you guys are doing. I like what you're talking about. I like what you're thinking about, and you have some potentially new technologies. Let's get into it. So before we get started, talk about what is Orca Security, what do you guys do? What problem do you solve? >> So what we invented at Orca is a unique technology called SideScanning, that essentially enables us to connect to any cloud environment in a way which is as simple as installing a smartphone application, and get full stack visibility of your security posture, meaning seeing all of the risks, whether it's vulnerabilities, misconfigurations, lateral movement risk, workloads that have already been compromised, and more and more, literally in minutes, without deploying any agent, without running any network scanners, literally with no change. And while it sounds to many of us like it can't happen, like it's snake oil, that's simply because we are so used to on-premise environments, where it simply wasn't possible on a physical server. But it is possible in the cloud. >> Yeah, and you know, we've had many (indistinct) on theCUBE over the years.
One (indistinct) told us that, and this is a direct quote, I'll find the clip and share it on Twitter, but he said, "The cloud is more secure than on premise, because it's more changes going on." And I asked him, "Okay, how'd you do?" He says, "It's hard, you got to stay on top of it." A lot of people go to the cloud, and they see some security benefits with the scale. But there are gaps. You guys are building something that solves those gaps, those blind spots, because things are always changing: you're adding more services, sometimes you're integrating, you now have containers that could have, for instance, you know, malware on them that gets introduced into a cluster. All kinds of things can go on in a cloud environment that was fine yesterday; you could have a production cluster that's infected. So you have all of these new things. How do you figure out the gaps and the blind spots? That's what you guys do, I believe. What are the gaps in cloud security? Share with us. >> So definitely, you're completely correct. You know, I totally agree the cloud can be dramatically more secure than on-prem. At the end of the day, unlike an on-prem data center, where someone can plug in a new firewall, plug in a new switch, and change things, and if you don't instrument it, you won't see what's inside, this is not possible in the cloud. In the cloud it's all code. It's all running on one infrastructure that can be used for the instrumentation. On the other hand, the cloud enabled businesses to act dramatically faster, and by dramatically, we're talking about an order of magnitude faster: you can create new networks in a matter of minutes, workloads can come and go within seconds. And this creates a lot of changes that simply haven't happened before. And it involves a lot of challenges, also from a security instrumentation point of view.
And you cannot use the same methodologies that you used on-prem, because if you use them, you're going to lose. They were a compromise that worked for certain physics, a certain set of constraints that no longer apply. And our thesis is that essentially, you need to use the capabilities of the cloud itself for the instrumentation of everything that runs on the cloud. And when you do that, by definition, you have full coverage, because if it runs on the cloud, it can be instrumented from the cloud; this is essentially what Orca does. And you're able to have this full visibility for all of the risks, because all of them essentially involve a workload, which we're able to analyze. >> What are some of the blind spots in the public cloud, for instance, that you guys are seeing, that you guys point out or see with the software and the services that you guys have? >> So the most common ones are the things that we have seen in the last decades; I don't think they are materially different, simply on steroids. We see services that were launched and nobody has maintained for years, things like improper segmentation, where everyone has permission to access everything, and therefore if one environment is breached, everything is breached. We see organizations where something was dramatically hardened, so people find a way around it. A very common thing is that now everyone talks about CIEM and tightening permissions and making sure that every workload has only the capabilities that it needs, but sometimes developers are a bit lazy, so they'll work around that and have keys stored that can bypass the entire mechanism, so that, again, everyone can do everything in any environment.
So at the end of the day, I think that the most common thing is the standard hygiene issues: making sure that your environment is patched, its configuration is tightened, there are no alternative ways into the environment, at scale. Because at the end of the day, as a security professional you need to secure everything, while attackers just need to find one thing that was missed. >> And you guys provide that visibility into the cloud, so to identify those. >> Exactly. I think one of the top reasons that we implemented Orca using the (indistinct) technology that I've invented is essentially because it guarantees coverage. For the first time, we can guarantee you that if you scan it that way, we'll see every instance, every workload, every container, because if it's running as a native workload, whether it's Kubernetes, whether it's a serverless function, we see it all, because we don't rely on any (indistinct) integration, we don't rely on friction within the organization. So many times in my career, I've been in discussions with customers that have been breached. And when we get to the core of the issue, it was: you haven't installed that agent, you haven't configured that firewall, the IPS was not up to date. So the protections weren't applied. So this is technically true, but it doesn't solve the customer's problem, which is: I need the security to be applied to all of my environment, and I can't rely on people to do manual processes, because they will fail. >> Yeah, yeah. I mean, you can't get everything now, with the velocity, the volume of activity. So let me just get this right, you guys are scanning containers. So the risk I hear a lot is, you know, with Kubernetes and containers, that a fully secure cluster could have a container come in with malware and penetrate. And even if it's air gapped, it's still there. So problematic. You would scan that? Is that how it would work?
>> So yes, but not only that: we are not scanning only containers, the essence of Orca is scanning the cloud environment holistically. We scan your cloud configuration, we scan your Kubernetes configuration, we scan your Docker hosts and the containers that run on top of them, we scan the images that are installed, and we scan the permissions that these images have, and most importantly, we combine these data points. So it's not like you buy one solution that looks at your AWS configuration, a different solution that looks at your virtual machines in one cluster, another one that looks at your cluster configuration, another one that looks at a web server, and one that looks at identity, and then you have results from five different tools, each one of which claims that its issue is the most important. In fact, you need to fuse the data and understand yourself which are the most important items and how they're correlated. We do it in a holistic way. And at the end of the day, security is more about thinking in graphs, in vectors, rather than lists. So it can tell you something like: this is a container which is vulnerable, it has permission to access your sensitive data, and it's running on a pod that is indirectly connected to the internet through this load balancer, which is exposed. So this is an attack vector that can be utilized, rather than just a tool that says you have vulnerable containers, when you might have hundreds where 99% of them are not exposed. >> Got it, so it's really more logical, common sense vectoring versus the old way, which was based on perimeter-based control points, right? Is that what I get? Is that right? You're looking at it like, okay, a whole new view of it, not necessarily the old way. Is that right? >> Yes, that's right, we are looking at it as one problem that is handled in one tool that has one unified data model, and on top of that, one scanning technology that can provide all the necessary data.
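Shua's "graphs and vectors, rather than lists" point lends itself to a toy sketch. The asset names and reachability edges below are invented for illustration, and this is in no way Orca's actual data model:

```python
# Each node is a cloud asset; an edge means "can reach". An attack vector is
# a path from the internet to a vulnerable asset that can touch sensitive data.
edges = {
    "internet": ["load_balancer"],
    "load_balancer": ["pod_a"],
    "pod_a": ["container_1"],
    "container_1": ["customer_db"],
    "pod_b": ["container_2"],   # equally vulnerable, but never exposed
}
vulnerable = {"container_1", "container_2"}
sensitive = {"customer_db"}

def reachable(start, graph):
    # Plain depth-first search over the reachability graph.
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

exposed = reachable("internet", edges)
# Flag only vulnerabilities that are internet-exposed AND can reach sensitive data:
attack_vectors = [v for v in sorted(vulnerable)
                  if v in exposed and reachable(v, edges) & sensitive]
print(attack_vectors)   # ['container_1']
```

A severity list would rank both containers the same; the graph keeps the 99% that are not exposed out of the alert queue.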
We are not a tool that says: install a vulnerability scanner, install identity and access management tools, feed all of the data in and Orca will make sense of it, and if you haven't installed the tools, it's not our problem. We are scanning your environment, all of your containers, virtual machines, serverless functions, and cloud configuration, using our technology. We find the risks, put them in a graph, and essentially determine which attack vectors matter for you. >> That sounds like a very promising value proposition. If I have workloads, production workloads, certainly in the cloud, and someone comes to me and says you could have essentially a holistic view of your security posture at any given point in that state of operations, I'm going to look at it. So I'm compelled by it. Now tell me how it works. Is there overhead involved? What's the cost? (indistinct) Australian dollars, but if you can (indistinct) share the price, that would be great. But like, I'm more thinking of me as a customer. What do I have to do? What operational things, what setup? What's my cost operationally, and is there overhead to performance? >> You won't believe me, but it's almost zero. Deploying Orca is literally three clicks, you just log into the application and give it read-only permission to the environment. And it does the rest, it doesn't run a single agent in the environment, it doesn't send a single packet. It doesn't create any overhead. We have within our public customer list companies with very critical workloads, which are time sensitive. I can quote some names, companies like Databricks, Robinhood, Unity, Sisense, Lemonade, and many others that have critical workloads, that have deployed it for all of their environments in a very quick manner with zero interruption to business continuity. And we focus on that, because at the end of the day, in a large organization, friction is the number one thing that kills security.
You want to deploy your security tool, you need to talk with the team, the team says, okay, we need to check it doesn't affect the environment, let's schedule it in six months; in six months there is something more urgent, then time flies by. And think of a security team in a large enterprise that needs to coordinate with 500 teams and make sure it's deployed; it can't work. Because we leverage the native cloud capabilities, we can guarantee there will be zero impact. This allows you to have the coverage and find those really weak spots nobody's been looking at. >> Yeah, I mean, having the technology you have is good, but the security teams are burning out. And this brings up the cultural issue we were talking about before we came on camera: the cultural impact of the security assessment kind of roles and responsibilities inside companies. Could you share your thoughts on this? Because this is a real dynamic for the people involved; it's people, process, technology, the classic, you know, things that are impacted by digital transformation. But really the cultural impact of how developers push code, the business drivers, how the security teams get involved. And sometimes the security teams are not under the CIO or are under these different groups, all kinds of impacts to how the security team behaves in the context of how code gets shipped. What's your vision and view on the cultural impact of security in the cloud? >> So, in fact, many times when people say that the cloud is not secure, I say that the culture that came with the cloud sometimes drives us to non-secure processes, or less secure processes. If you think about it, only a decade ago, if an organization could deliver a new service in a year, it would be an amazing achievement, from design to delivery. Now, if an organization cannot ship it within weeks, it's considered a failure. And this is natural, something that was enabled by the cloud and by the technologies that came with the cloud.
But it also created a situation where security teams that used to be some kind of a checkpoint along the way are no longer in that position. They're on one end responsible for auditing and making sure that things are acting as they should, but on the other end, things happen without their involvement. And this is a very, very tough place to be; nobody wants to be the one that tells the business it can't move as fast as it wants, because the business wants to move fast. So this is essentially the friction that exists: can we move fast, and how can we move fast without breaking things and without breaking critical security requirements? So I believe that security is always about a triad: educate, because there's nothing better than education; put in guardrails, to make sure that people cannot make mistakes; but also verify and audit, because there will be failures. Even if you educate, even if you put up guardrails, things won't work as needed. And essentially, our position within this triad is to audit, to verify, to empower the security teams to see exactly what's happening, and this is an enabler for a discussion. Because if you see what the risks are, the fact that, you know, you have this environment that hasn't been patched for a decade with the password 123456, it's a different case than "I need you to look at this environment because I'm concerned that I haven't reviewed it in a year." >> That's exactly a great comment. You mentioned friction kills innovation earlier. This is one friction point: the mismatch of cadence between ownership of process, business owners' goals of shipping fast, security teams wanting to be secure, and developers just wanting to write code faster too. So productivity, burnout, innovation are all factors in cloud security. What can a company do to get involved? You mentioned easy to deploy. How do I work with Orca? Is it a freemium? What is the business model?
How do I engage with you if I'm interested in deploying? >> So one thing that I really love about the way that we work is that you don't need to trust a single word I say. You can get a free trial of Orca at orca.security, run a scan on your cloud environment, and see for yourself whether there are critical risks that were overlooked, whether everything is fine and there is no need for a tool, or whether there are some areas that are neglected and can be attacked at any given moment (indistinct) been breached. We are not a freemium, but we offer free trials. And I'm also a big believer in simplicity in pricing: we just price by the average number of workloads that you have, you don't need to read a long formula to understand the pricing. >> Reducing friction, it's a great ethos; it sounds like you guys have a good vision on making things easy and frictionless, and that's what we want. So maybe I should ask you a question. I want to get your thoughts, because there are a lot of conversations in the industry around shifting left, and that certainly makes a lot of sense. Which controls in security do you want to shift left, and which ones do you want to shift right? >> So let me put it this way: I've been in this industry for more than two decades. And like in any industry, whenever there is a trend around something which is super valuable, some people believe that this is the only thing that you need to do. If you know the Gartner Hype Cycle, at the beginning, every technology is (indistinct) of that, and we believe that it can do everything, and then it reaches the (indistinct) productivity of the area of the value that it provides. Now, I believe that shifting left is similar to that. Of course, you want to shift left as much as possible, you want things to be secure as they come off the production line.
This doesn't mean that you don't need to audit what's actually running, because, you know, I can quote Amazon CTO Werner Vogels on this: everything that can break will break, everything fails all the time. You need to assume that everything will fail all the time, including all of the controls that you baked in. So you need to bake in as much as possible early on, and audit what's actually happening in your environment to find the gaps, because this is the responsibility of security teams. Now, just checking everything after the fact, of course, is a bad idea. But only investing in shifting left and education, and having no controls on what's actually happening, is a bad idea as well. >> A lot of people, first of all, great call out there. I totally agree, shift left as much as possible, but also get the infrastructure and your foundational data strategies right, and keep watching and auditing. I have to ask you the next question in the context of the data, right, because you could audit all day long, all night long. But you're going to be looking for a needle in a haystack of needles, as they say, and you've got to have context. And you've got to understand when things can be jumped on. You can have alert fatigue, for instance, you don't know what to look at, you can have too much data. So how do you manage the difference between making the developers productive with the shift left, and the shift right auditing? What's the context and (indistinct)? How do you guys talk about that? Because I can imagine, yeah, it makes sense. But I want to get the right alert at the right time, when it matters the most. >> We look at risk as a combination of three things. Risk is not only how pickable the lock is. If I come to your office and tell you that you have a security issue, that the cleaning (indistinct) lock can be easily picked, you'll laugh at me. Technically, it might be the most pickable lock in your environment.
But you don't care, because the exposure is limited, you need to get into the office, and there's nothing valuable inside. So I believe that we always need to look at risk as the exposure, who can reach that lock; how easily pickable this lock is; and what's inside, is it your critical crown jewels, is it keys that can open another lock that holds the crown jewels, or just nothing. And when you take this into context, the one wonderful thing about the cloud is that, for the first time in the history of computing, the data that is necessary to understand the exposure and the impact is in the same place where you can also understand the risk of the locks. You can make a very concise decision, easily (indistinct), that makes sense. That is a critical attack vector, that is a (indistinct) critical vulnerability that is exposed, it is an exposed service and the service has keys that can download all of my data; or maybe it's an internal service, but the port is blocked, and it just has a default web server behind it. And when you take that, you can literally pinpoint the 0.1% of the alerts, even less than that, that can be actually exploited, versus ones that might have the same severity scores or sound as critical, but don't have a risk in terms of exposure or business impact. >> So this is why context matters. I want to just connect what you said earlier and see if I get this right. What you just said about the lock being picked, what's behind the door can be more keys. I mean, they're all there, and the thieves know, (indistinct) bad guys know exactly what these vectors are. And they're attacking them. But the context is critical.
But now, that's what you were getting at before by saying there's no friction or overhead, because the old way was, you know, send probes out there, send people out in the network, send packets to go look at things, which actually clutters up the traffic, or, you know, look for patterns. That's reliant on footsteps, or whatever metaphor you want to use. You don't do that, because you just wire up the map. And then you put context to things that have weights, I'm imagining graph technologies involved, or machine learning. Is that right? Am I getting that kind of conceptually right, that you guys are laying it out holistically and saying, that's a lock that can be picked, but no one really cares, so no one's going to pick it, and if they do, there's no consequence, therefore move on and focus energy? Is that kind of getting it right? Can you correct me where I got that wrong? >> So you got it completely right. On one end, we do the agentless deep assessment to understand your workloads, your virtual machines or containers, your apps, and the services that exist within them, using the SideScanning technology that some people, you know, call the MRI for the cloud. And we build the map to understand what is connected to the security groups, the load balancers, the keys that they hold, what these keys open, and we use this graph to essentially understand the risk. Now we have a graph that includes risk and exposure and trust. And we use this graph to prioritize the attack vectors that matter to you. So you might have thousands upon thousands of vulnerabilities on servers that are simply internal, and these cannot be manifested, that will be (indistinct), and 0.1% of them that can be exploited indirectly through a load balancer, and we'll be able to highlight these ones. And this is the way to solve alert fatigue. We've been in large organizations that use other tools, that had a million critical alerts using the tools before Orca. We ran our scanner, we found 30.
And you can manage 30 alerts if you're a large organization; no one can manage a million alerts. >> Well, I've got to say, I love the value proposition. I think you're bringing a smart view of this. I see you have the experience there, Avi and team, congratulations, and it makes sense: the cloud is a benefit, it can be leveraged. And I think security being rethought this way is smart. And I think it's being validated. Now, I did check the news, you guys have significant traction, and the valuation certainly raised around the funding of (indistinct) 10 million, I believe, a (indistinct) funding, over a billion dollar valuation, which pushes unicorn status. I'm sure that's a reflection of your customer traction. Could you share some of the customer success that you're having? What does the adoption look like? What are some of the things customers are saying? Why do they like your product? Why is this happening? I mean, I can connect the dots myself, but I want to hear what your customers think. >> So definitely, we're seeing huge traction. We grew by thousands of percent year over year. There were literally times during late last year where, with our sales team, you had to wait two or three weeks till you managed to speak to a seller to work with Orca. And we see the reason as: organizations have the same problems that we were in, and that we are focused on. They have cloud environments, they don't know their security posture, and they need to own it. And they need to own it now, in a way which guarantees coverage, guarantees that they'll see the important items, and there was no other solution that could do that before Orca. And this is the fact. We literally reduce deployment from something that takes months to minutes. And this makes it something that can happen, rather than being on the roadmap and waiting for the next guy to come and do that. So this is what we hear from our customers, and the basic value proposition for Orca hasn't changed.
We're providing, literally, cloud security that actually works, that is providing full coverage, comprehensive and contextual, in a seamless manner. >> So talk about the benefits to customers, I'll give you an example. Let's just say theCUBE, we have our own cloud. It's growing like crazy. And we have a DevOps team, a very small team, and we start working with big companies, and they all want to know what our security posture is. Do I have to go hire a bunch of security people, or do I just work with Orca? Because the trend is more integration. I was just talking to another CEO of a hot startup, and the platform engineering conversation is about how people are integrating in the cloud and across clouds and on premises. So integration is all about posture as well. I want to know, people want to know who they're working with. Does that factor into anything? Because I think that's table stakes for companies, to have almost a posture report, almost like an MRI, as you said, or a clean bill of health. >> So definitely, we are both providing the prioritized risk assessment. So let's say that your cloud team want to check their security, the cloud security risk; they'll connect Orca, they'll see (indistinct) in a very, very clear way: what's been compromised, (indistinct) zero; what's an imminent compromise, meaning the attacker can utilize it today, and you probably want to fix it as soon as possible; and things that are hazardous, in terms that they are very risky, but there is no clear attack vector that can utilize them today; there might be things that, combined with other changes, will become an imminent compromise. But on top of that, understand, people also have compliance requirements, people are subject to regulations like PCI, CCPA, (indistinct) and others. So we also show the results in the lens of these compliance frameworks.
So you can essentially export a report showing, okay, we were scanned by Orca, and we comply with all of these requirements of SOC 2, etc. And this is another value proposition: essentially not only showing it through a risk lens, but also through the compliance lens. >> You've got to be always on with security and cloud. Avi, great conversation. Thank you for sharing the knowledge and going deep on some of the solution, and I appreciate the conversation. Thanks for coming on. >> Thanks for having me. >> Avi Shua, CEO and co-founder of Orca Security, a hot startup taking on security in the cloud and getting it right. I'm John Furrier with theCUBE. Thanks for watching. (calm music)
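Avi's framing of risk, how pickable the lock is, who can reach it, and what's behind the door, together with the graph-driven filtering down to the 0.1% of alerts that matter, can be sketched as a toy prioritizer. The asset names, the scores, and the multiplicative formula below are illustrative assumptions, not Orca's actual model.

```python
# Minimal sketch of context-based risk prioritization, in the spirit of
# the approach Avi describes. All assets, scores, and the scoring
# formula are hypothetical -- an illustration, not Orca's engine.

# Each asset: severity (how "pickable" the lock is), exposure (who can
# reach it), impact (what's behind the door), each normalized to 0..1.
assets = {
    "internal-db": {"severity": 0.9, "exposure": 0.0, "impact": 0.9},
    "lb-web":      {"severity": 0.9, "exposure": 1.0, "impact": 0.8},
    "dev-sandbox": {"severity": 0.9, "exposure": 1.0, "impact": 0.1},
}

def risk(asset):
    # Risk combines all three factors: a lock nobody can reach, or with
    # nothing behind it, contributes ~zero regardless of severity.
    return asset["severity"] * asset["exposure"] * asset["impact"]

ranked = sorted(assets, key=lambda name: risk(assets[name]), reverse=True)
critical = [name for name in ranked if risk(assets[name]) > 0.5]
print(critical)  # only the exposed, high-impact asset surfaces
```

Note how all three assets carry the same raw severity, yet only one survives the exposure-and-impact filter; that is the "million alerts down to 30" effect in miniature.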

Published Date : May 18 2021


Beth Davidson & Raj Behara, Agero | AWS re:Invent 2020


 

>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Hello, everyone, and welcome back to theCUBE's continuing coverage of AWS re:Invent 2020 Virtual, theCUBE Virtual. We're here covering the partner ecosystem and some of the new innovations coming from the re:Invent community. Let's talk about something that anyone who drives a vehicle can relate to: roadside assistance. With me are Beth Davidson, chief marketing officer at Agero, and Raj Behara, the vice president and CTO at Agero. Folks, welcome to theCUBE. >>Hello, nice to see you. >>So let's start with you. Maybe talk a little bit about your mission, how you work with automakers. You've got, you know, a lot of good pipeline there, insurers and others in the ecosystem. Tell us about the company. >>Absolutely. So for 50 years, we've been helping consumers with their cars. Um, that's what it comes down to. We know that one in three people has a roadside event every year, and the way to think about that is, you know, if in three years you haven't had a roadside event, tick tock. You know, statistically, it's coming for you. We work with everybody. We work with the auto manufacturers. We work with the insurers. What we're trying to do is get closer to consumers. And the reason you may have never heard of Agero is that's by design. We're white label. We work for our clients, typically, and, you know, they trust us with their consumers, they trust us with their brands. Um, and we're just in the business of getting consumers back on the road. >>Thank you for that. So talk a little bit about how you approach this problem. I mean, with roadside assistance, you know, we can again all relate: am I up to date, whether I own or lease the car? So there's gotta be some kind of 800 number in my glove compartment somewhere, right? So what was the state of roadside assistance before you guys got involved?
And maybe we could get into sort of how you solve the problem. >> Yeah, I think that's a great question, Dave. As we look at roadside assistance, everyone thinks about picking up the 800 number from the glove box compartment. And over the years, we have invested heavily in bringing a fully digital experience to our customers, from insurance companies to OEMs. And when this Alexa opportunity came up earlier this summer, we said, hey, how about taking that digital experience, adding all the Alexa goodness about voice interaction, making it very interactive for users to request that experience in a very normal, consumer-friendly way. And we integrated all those services and got that whole Uber-like experience for roadside assistance. >> Yeah. Now, Beth, you know, I'm reminded of when the smart TV first came out, you had to type it in, right, and we're really getting spoiled now. It should be as easy as a blink. Okay, so you're unveiling Blink, you know, what's this service all about? >> So this service is about, you know, trying to get to consumers as easily as we can and removing the friction, right? So what Raj was just talking about is, again, we ask consumers, we say, you know, imagine that tomorrow you went out and there was a flat tire on your car in your driveway. What do you do? And universally, they pause, and they're like, I don't know, I haven't thought about it, right? And then they start making up stuff. Like, maybe I'm gonna go through the glove box. Maybe I'm going to go through my files. But wouldn't it be great if they could just kind of talk to the air and say, Alexa, what do I do, and have it work for them, you know? And that's one friction. The second friction is consumers actually don't know their addresses, or don't know them well.
We joke around the office about how the difference between saying you're on Route 1 and Route 1A is the difference between 20 minutes of that tow truck getting to you in time. You know, these are points of friction that technology can help us with, and then with payments, even better, right? So the fact that you can pay for this thing with Amazon Pay, and you don't have to worry about having cash for a driver or having a credit card. I mean, there are just so many points of friction that are reduced by using Alexa. >> Okay, so let's talk about the integrations here and the technical aspects of how you put everything together and made it work, and we'll get into some of the cloud aspect. >> At launch, we're asking users to tell us what they want, and they can tell us the whole address, they can get the address from the Alexa device, or, if it is Alexa Auto, the GPS will provide us the lat and long. And we take that address, and we get what kind of experience they want. If it is a flat tire, we're going to send somebody out to put on the spare; if it is a jump start, we're gonna send somebody to jump-start the vehicle. So depending on that, we pull all that information together, get the consent from the user to charge the Amazon payment card on their profile, and then go. So it's literally two sentences, and then we're on to sending you the experience, with text messages that will allow you to track the tow truck coming down to your driveway. >> Now I'll show my age. So yeah, we've all, I don't know about all, but I've been locked out of the car many times. Now, in the old days, you used to be able to get a coat hanger and pop it open. But so, do people still get locked out of their cars? >> Yes, of course. More often than not, it's, you know, the key fob stopped working, right? Lost the battery of my key fob these days. But it's the equivalent. >> Alright, so, what else do you guys do in the cloud?
Do you use AWS for your own business? Maybe share with us some of that. >> Over the years, for the past seven, eight years, we have, uh, integrated and moved all of our technologies into the AWS cloud. And we have now revamped and re-innovated on top of those and created new product lines. We have accident scene management, we handle automatic crash notifications for some of our partner customers, we do dealer service appointments. So we do a lot of these things. And none of these are possible without the amazing teams, 20 or so teams that we have across three continents, working on 50-plus, uh, approved services on AWS, uh, innovating around the clock, bringing these new innovations to our market. >> So, Beth, you were saying earlier that you, you know, want to reach out to the consumer. I mean, how do you market? Uh, you obviously go through partners. And I'm curious, what's your go-to-market, and maybe how are you different from others in the marketplace? >> Right. So again, because we're white label with most of the client-side business that we do, we help our clients message better, and so we talk to them about how often you have to remind people that this isn't a one-and-done, um, on the skill store for Alexa. You know, how we're different is, you know, as much as I love the branding that we came up with, Blink Roadside, you don't actually have to use it. You don't have to say, Alexa, open my Blink Roadside. You could just say, Alexa, help me with my flat tire, which really helps cut out the fact that I actually need to market the brand like a traditional marketer would have had to. But our biggest problem is how do you market something to someone in that moment of need, right? How do I prime you to get you to think about it way, way before you ever actually have the problem? >> And how do you charge for the service? >> So, it's a flat fee.
It's better than what consumers would be able to get on their own, or at least we believe so. But it is a flat fee for any kind of road service. So it's flat tires, it's dead batteries, it's winching you out, you know, it's all of those things, um, that can happen to you, that are just kind of those minor everyday mishaps. >> Okay. And so how do I get it? Do I have to hope, if I'm leasing a car, that the automaker has it? Can I go direct? How do I... >> All direct. It's all direct. So you don't have to worry about an ID number, a membership number. You're just paying for it out of your Amazon account, and, you know, you don't have to worry about knowing your however-many-digit VIN number. You know, none of that stuff. It's just one and done. >> Awesome. So, Raj, I wonder if you could talk a little bit about your scale. Um, maybe, I don't know if you can share any metrics, and what factors the cloud generally, and AWS specifically, have played in enabling that scale. >> Yeah, we have an amazing number of integrations with Fortune 100 insurance companies, um, over 35 insurance companies, and we have 170 B2B clients today. Um, and we integrate with them very deeply, um, integrated into the billing systems, into their coverage systems. And all of that is to be able to provide that sub-minute, sub-second experience to our customers when they're calling in, uh, when they need the service. Um, right now we do over a billion API calls per quarter as a result of these transactions and all these integrations. And all of these, uh, our third-party service providers, who go around on the roads and provide the location information of the tow trucks to us, all of these 8,000 or so trucks stream that information to us almost every hour. So we bring all that information together on the AWS platform, stream it back, share it back in a very secure, private manner, back to the customers, right at the moment of need.
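To get a feel for the scale Raj quotes, a billion API calls per quarter and roughly 8,000 trucks streaming location about once an hour, here is a back-of-the-envelope calculation. The quoted figures are rounded and the arithmetic is purely illustrative, not Agero's actual traffic profile.

```python
# Back-of-the-envelope on the scale described: ~1B API calls per quarter
# and ~8,000 tow trucks each streaming a location update roughly hourly.
# Rough arithmetic only; the quoted figures are rounded.

SECONDS_PER_QUARTER = 91 * 24 * 3600          # ~one quarter of a year

api_calls_per_quarter = 1_000_000_000
avg_calls_per_second = api_calls_per_quarter / SECONDS_PER_QUARTER

trucks = 8_000
truck_updates_per_second = trucks / 3600      # one update per truck per hour

print(round(avg_calls_per_second))            # ~127 calls/sec on average
print(round(truck_updates_per_second, 1))     # ~2.2 location updates/sec
```

Averages understate the real engineering problem, of course; roadside events cluster around storms and rush hours, which is exactly why elastic cloud capacity matters here.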
>>Yeah. So I mean, without the cloud, you'd be backing up, you know, the servers to the truck to the loading dock, and it would just take so much longer to spin up new products. I would imagine that you guys have a lot of ideas about new data products or new services that you can provide. Um, I'm not sure you can tell us what they are, but in terms of the time it takes you to go from concept to getting to market, that must be compressed with the cloud. >> Yeah, it's a fraction of what it used to take years ago, when we were not in AWS, right? And it also allows us not to spend all this time worrying about the same things that you used to worry about for every project. Now you can actually think about how you'll be able to leverage new innovations that are coming in and actually improve the experience, with some kind of intelligence that is added on, which makes the experience much smoother for people. >> Well, Beth, we'll give you the last word. But first of all, thanks for helping make our lives even better and more convenient. But bring us home. What's the last word here? >> So the last word is, you know, we do 12 million events a year right now, right? And if you like math, that's 35,000 a day, it's 20-some every minute, you know. And the work that Raj and team have done to make this scalable means we're ready to do the next 12 million. And, you know, we know there are consumers out there having those events. We just want to be there for you, you know, take care of that frustrating event, and get you back on the road. >> Well, it's just, you know, having you there and being able to push a button and talk to a device is just, it's a game changer. So thank you guys for coming on theCUBE and sharing your story, really interesting. Yeah. All right. Thanks for watching. Keep it right there. You're watching theCUBE's coverage of AWS re:Invent 2020. We'll be right back right after this short break.
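The "two sentences and go" request flow Raj described earlier, a location from a spoken address or from Alexa Auto's GPS, a service type, and an Amazon Pay consent before dispatch, can be sketched as a minimal handler. All function and field names here are hypothetical, not Agero's or Amazon's actual APIs.

```python
# Minimal sketch of the roadside-request flow described in the interview.
# Names and structure are hypothetical, for illustration only.

SERVICES = {
    "flat tire": "send truck with spare",
    "jump start": "send truck with jumper kit",
    "lockout": "send locksmith",
}

def handle_request(service, address=None, gps=None, payment_consent=False):
    if service not in SERVICES:
        return {"status": "error", "reason": "unknown service"}
    if address is None and gps is None:
        return {"status": "error", "reason": "need address or GPS fix"}
    if not payment_consent:
        return {"status": "error", "reason": "payment consent required"}
    # Prefer the spoken address; fall back to the lat/long from the device.
    location = address if address is not None else f"{gps[0]},{gps[1]}"
    # In the real flow, dispatch happens here and the user starts getting
    # text messages to track the tow truck to their driveway.
    return {"status": "dispatched", "action": SERVICES[service],
            "location": location}

print(handle_request("flat tire", gps=(42.33, -83.05), payment_consent=True))
```

The friction-reduction points Beth made map directly onto the three guard clauses: no membership ID, no address spelling, no separate payment step.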

Published Date : Dec 15 2020


Frank Keynote with Disclaimer


 

>>Hi, I'm Frank Slootman, CEO of Snowflake, and welcome to the Snowflake Data Cloud Summit. I'd like to take the next few minutes to introduce you to
Chances are that, you know, snowflake as a >>world class execution platform for a diverse set of workloads. Among them data warehousing, data engineering, data, lakes, data, science, data applications and data sharing. Snowflake was architected from scratch for cloud scale computing. No legacy technology was carried forward in the process. Snowflake reimagined many aspects of data management data operations. The result was a cloud data platform with massive scale, blistering performance, superior economics and world class data governance. Snowflake innovated on a number of vectors that wants to deliver this breakthrough. First scale and performance. Snowflake is completely designed for cloud scale computing, both in terms of data volume, computational performance and concurrent workload. Execution snowflake features numerous distinct innovations in this category, but none stands up more than the multi cluster shared stories. Architectural Removing the control plane from the individual cluster led to a dramatically different approach that has yielded tremendous benefits. But our customers love about Snowflake is to spin up new workloads without limitation and provisioned these workloads with his little or as much compute as they see fit. No longer do they fear hidden capacity limits or encroaching on other workloads. Customers can have also scale storage and compute independent of each other, something that was not possible before second utility and elasticity. Not only can snowflake customer spin up much capacity for as long as they deem necessary. Three. Utility model in church, they only get charged for what they consumed by the machine. Second, highly granular measurement of utilization. Ah, lot of the economic impact of snowflake comes from the fact that customers no longer manage capacity. What they do now is focused on consumption. In snowflake is managing the capacity. Performance and economics now go hand in hand because faster is now also cheaper. 
Snowflake contracts with the public cloud vendors for capacity at considerable scale, which then translates to a good economic value at the retail level is, well, third ease of use and simplicity. Snowflake is a platform that scales from the smallest workloads to the largest data estates in the world. It is unusual in this offer industry to have a platform that controversy the entire spectrum of scale, a database technology snowflake is dramatically simple fire. To compare to previous generations, our founders were bent on making snowflake, a self managing platform that didn't require expert knowledge to run. The role of the Deba has evolved into snowflake world, more focused on data model insights and business value, not tuning and keeping the infrastructure up and running. This has expanded the marketplace to nearly any scale. No job too small or too large. Fourth, multi cloud and Cross Cloud or snowflake was first available on AWS. It now also runs very successfully on mark yourself. Azure and Google Cloud Snowflake is a cloud agnostic platform, meaning that it doesn't know what it's running on. Snowflake completely abstracts the underlying cloud platform. The user doesn't need to see or touch it directly and also does not receive a separate bill from the cloud vendor for capacity consumed by snowflake. Being multi cloud capable customers have a choice and also the flexibility to change over time snowflakes. Relationships with Amazon and Microsoft also allow customers to transact through their marketplaces and burned down their cloud commit with their snowflakes. Spend Snowflake is also capable of replicating across cloud regions and cloud platforms. It's not unusual to see >>the same snowflake data on more than one public cloud at the time. Also, for disaster recovery purposes, it is desirable to have access to snowflake on a completely different public cloud >>platform. 
Fifth, data security and privacy. Security and privacy are commonly grouped under the moniker of data governance. As a highly managed cloud data platform, Snowflake designs and deploys a comprehensive and coherent security model. While privacy requirements are newer and still emerging in many areas, Snowflake as a platform is evolving to help customers steer clear of costly violations. Our data sharing model has already enabled many customers to exchange data without surrendering custody of data, a key privacy concern. There's no doubt that a strong governance and compliance framework is critical to extracting the analytical value of data. Directly following this session, please stay tuned to hear from Anita Lynch at Disney Streaming Services about how the data cloud enables data governance at Disney. The world beat a path to our door as Snowflake unleashed the move from on-premises data centers to the public cloud platforms, notably AWS, Azure and Google Cloud. Snowflake now has thousands of enterprise customers averaging over 500 million queries a day across all customer accounts, and it's one of the fastest-growing enterprise software companies in a generation. Our recent listing on the New York Stock Exchange was billed as the largest software IPO in history. But the data cloud conversation is bigger. There is another frontier. Workload execution is a huge part of it, but it's not the entire story. There is another elephant in the room, and that is that the world's data is incredibly fragmented and siloed, across clouds of all sorts and data centers all over the place. Basically, data lives in a million places, and it's incredibly hard to analyze data across the silos. Most intelligence, analytics and learning models deploy on single data sets, because it has been next to impossible to analyze data across sources. Until now. The Snowflake Data Cloud is a data platform shared by all Snowflake users. If you are on Snowflake, you are already plugged into it.
It's like being part of a global data federation, a data orbit if you will, where all other data can now be part of your scope. Historically, technology limitations led us to build systems and services that siloed the data behind systems, software and network perimeters. To analyze data across silos, we resorted to building special-purpose data warehouses, force-fed by multiple data sources and powered by expensive proprietary hardware. The scale limitations led to even more silos. The onslaught of the public cloud opened the gateway to unleashing the world's data for access, for sharing and monetization. But it didn't happen. Pretty soon there were new silos: different public clouds, regions within them, and a huge collection of SaaS applications hoarding their data, all in their own formats and destinations. Whole industries exist just to move data from A to B. Customer behavior precipitated the siloing of data with what we call a workload-at-a-time mentality. Customers focused on the applications in isolation of one another and then deployed data platforms for their workload characteristics and not much else, thereby throwing up new walls between data. Pretty soon, we don't just have our old silos, but new ones to contend with as well. Meanwhile, the promise of data science remains elusive. With all this siloing and bunkering of data, workload performance is necessary but not sufficient to enable the promise of data science. We must think about unfettered data access with ease, zero latency and zero friction. There's no doubt that the needs of data science and data engineering should be leading, not an afterthought. And those needs are centered on accessing and analyzing data across sources. It is now more the norm than the exception that data patterns transcend data sources. Data silos have no meaning to data science. They are just remnants of legacy computing architectures. It doesn't make sense to evaluate strictly on the basis of existing workloads.
The world changes, and it changes quickly. So how does the data cloud enable unfettered data access? It's not just a function of being in the public cloud. Public cloud is an enabler, no doubt about it. But it introduces new silos: fragmentation by cloud platform, by cloud region, by data lake and by data format. It once again triggered technical gymnastics and a lot of programming to bring a single analytical perspective to a diversity of data. Data was not analytics-ready, not optimized for performance or efficiency, and clearly lacking on data governance. Snowflake addressed these limitations, thereby combining great execution with great data access. With Snowflake, we can have the best of both. So how does it all work? When you join Snowflake and have your Snowflake account, you don't just avail yourself of unlimited storage and compute resources along with a world-class execution platform. You also plug into the Snowflake Data Cloud, meaning that all Snowflake accounts across clouds, regions and geographies are part of a single Snowflake data universe. That is the data cloud. It is based on our global data sharing architecture. Any Snowflake data can be exposed and accessed by any other Snowflake user. It's seamless and frictionless; data is generally not copied or moved, but accessed in place, subject to the same Snowflake governance model. Accessing the data cloud can be a tactical one-to-one sharing relationship. For example, imagine how a retailer would share data with a consumer packaged goods company. But then it easily proliferates from one-to-one, to one-to-many, to many-to-many. The data cloud has become a beehive of data supply and demand. It has attracted hundreds of professional data listings to the Snowflake Data Marketplace, which fuels the data cloud with a rich supply of options. For example, our partner Star Schema listed a very detailed COVID-19 incidence and fatality data set on the Snowflake Data Marketplace.
It became an instant hit with Snowflake customers. The Star Schema listing is not just raw data. It is also platform-optimized, meaning that it was analytics-ready for all Snowflake accounts. Snowflake users were accessing, joining and overlaying this new data within a short time of it becoming available. That is the power of platform. In financial services, it's common to see Snowflake users access data from Snowflake Marketplace listings like FactSet and Standard & Poor's and then mash it up against, for example, Salesforce data. There are now over 100 suppliers of data listings on the Snowflake Marketplace. That is in addition to thousands of enterprise and institutional Snowflake users with their own data sets. The best part of the Snowflake Data Cloud is this: you don't need to do or buy anything different. If you're on Snowflake, you're already plugged into the data cloud. A whole world of data access options awaits you, and data silos become a thing of the past. Enjoy today's presentations. By the end of it, you should have a better sense and a bigger context for your choices of data platforms. Thank you for joining us.
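The provider/consumer mechanics behind the sharing model described in this keynote can be sketched with a handful of statements. This is a minimal illustration, not Snowflake documentation: the account, database and share names are hypothetical, and the helper functions simply assemble the standard pattern (create a share, grant objects to it, then mount it on the consumer side as a read-only database, with no data copied or moved).

```python
# Sketch of the data sharing flow described in the keynote: data is granted
# in place and never copied or moved. All names (accounts, databases, shares,
# tables) are hypothetical illustrations, not real identifiers.

def provider_statements(db: str, share: str, consumer: str) -> list[str]:
    """SQL a data provider would run to expose a schema through a share."""
    return [
        f"CREATE SHARE {share};",
        f"GRANT USAGE ON DATABASE {db} TO SHARE {share};",
        f"GRANT USAGE ON SCHEMA {db}.public TO SHARE {share};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {db}.public TO SHARE {share};",
        f"ALTER SHARE {share} ADD ACCOUNTS = {consumer};",
    ]

def consumer_statements(provider_account: str, share: str, local_db: str) -> list[str]:
    """SQL a consumer would run to mount the share as a read-only database."""
    return [
        f"CREATE DATABASE {local_db} FROM SHARE {provider_account}.{share};",
        f"SELECT count(*) FROM {local_db}.public.cases;",
    ]

if __name__ == "__main__":
    for stmt in provider_statements("covid19", "covid_share", "cpg_partner"):
        print(stmt)
    for stmt in consumer_statements("provider_acct", "covid_share", "covid19_shared"):
        print(stmt)
```

The design point the keynote emphasizes is visible in the consumer half: the consumer gets a queryable database from the share itself, so the provider retains custody while everyone queries the same live data.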

Published Date: Nov 19, 2020



Power Panel | Commvault FutureReady


 

>>From around the globe, it's the Cube, with digital coverage of Commvault FutureReady 2020. Brought to you by Commvault. >>Hi, and welcome back. I'm Stu Miniman, and we're at the Cube's coverage of Commvault FutureReady. We've got the power panel to really dig in on the product announcements that happened at the event today. Joining me, we have three guests. First of all, we have Ranga Rajagopalan. He's the vice president of products. Sitting next to him is Don Foster, vice president of storage solutions. And in the far piece of the panel is Mercer Rowe, vice president of global channels and alliances. All three of them are with Commvault. Gentlemen, thanks, all three of you, for joining us. All right, so first of all, great job on the launch. You know, these days with a virtual event, doing the announcements, the engagement with the press and analysts, having demos and customer discussions, it's a challenge to put all those together, and it has been an engaging and interesting watch today. So we're going to start with you. You've been quite busy today explaining all the pieces. At a very high level, this really looks like the culmination of the update of the Commvault portfolio, new team, new products, compared to kind of a year, year and a half ago. So if you could start us off with kind of the high points?
Also, today happens to be the data, and we got to know that we are the leader in Gartner Magic Quadrant for the ninth consecutive. I am so a lot of goodness today for us. >>Excellent. Lots of areas that we definitely want to dig deep in to the pieces done. You know, we just heard a little bit about Hedvig was an acquisition a year ago that everybody's kind of looking at and saying Okay, you know, will this make them compete against some of their traditional partners? How we get integrated in So, baby, just give us one level deeper on the Hedvig piece on what that means to the portfolio? Yeah, sure, So I >>guess I mean, one of the key things that the random mentioned was the fact that had hyper scale that's is built off the head Day files. So that's a huge milestone for us. As we teased out maybe 10 months ago. Remember, Tomball, Go on the Cube and talking about, you know, kind of what our vision and strategy was of unifying data and storage management. Those hyper hyper scale X applying is a definite milestone improving out that direction. But beyond just the hyper scale ECs, we've also been driving on some of the more primary or modern workloads such as containers and the really interesting stuff we've come out with your recently is the kubernetes native integration that ties in all of the advanced component of the head to distribute storage architecture on the platform itself across multi cloud and on premise environments, making it really easy and policy driven. Um, for Dev, ops users and infrastructure users, the tie ins applications from a group, Friction >>Great and Mercer. There's some updates to the partner program and help us understand how all of these product updates they're gonna affect the kind of the partnerships and alliances beasts that you want. >>Absolutely. 
So in the time since our last meeting that go in the fall, which is actually right after I had just doing combo, we spent a good portion of the following six months really talking with partners, understanding the understand the impact of the partner program that we introduced last summer, looking at the data and really looking at barriers to evolve the program, which fell around three difference specific. Once you bet one was simplicity of the simplicity of the program, simplicity of understanding, rewards, levers and so forth. The second was paying for value was really helping, helping our partners to be profitable around things like deal registration on other benefits and then third was around co investment. So making sure that we get the right members in place to support our partners and investing in practices. Another training, another enablement around combo and we launched in over these things last week is a part of an evolution of that program. Today is a great follow on because in addition to all of the program evolutions that we we launched last week now we have an opportunity with our partners to have many more opportunities or kind of a thin into the wedge to open up new discussions with our customers now around all of these different use cases and capabilities. So back to that simplification angle, really driving more and more opportunities for those partners toe specific conversations around use cases. >>Okay, for this next question, I think it makes sense for you to start. Maybe maybe Don, you can get some commentary in two. But when he's firstly the announcements, there are some new products in the piece that you discuss but trying to understand, you know, when you position it, you know, do you call the portfolio? Is it a platform? You know, if I'm an existing Conn Volt customer, you know, how do I approach this? If I use something like metallic, how does that interplay with some of the new pieces that were discussed today. 
>>Sure, I can take this first. I'm sure Don and Mercer will have more to add to it. The simplest way to think about it is as a portfolio. But contrary to how you would think about a portfolio of independent products, what we have is a set of data management services, granular and very aligned to the use case, which can all interoperate with each other. So we launched Backup and Recovery and Disaster Recovery. These can be handled separately, purchased separately and deployed standalone, or, for customers who want a combination of those capabilities, we also have Complete Data Protection. Our file storage optimization, data governance, e-discovery and compliance are data management services that build on top of any of these capabilities. Now, a very differentiating factor in our platform is that all the services we're talking about are delivered off the same software, making them simpler to manage and use. So it's very easy to start with one service and then just turn on the license and go to other services. I can understand where the confusion is coming from, but it's all built with customer simplicity and flexibility in mind, and it's all delivered off the same platform. So it is a portfolio built on a single platform. Don, would you like to add more to it? 
Yeah, I think the interesting thing to add on top of that is where we're going with the Hedvig distributed storage platform. To Ranga's point about how everything is integrated and feeds and works off of one another, that's the same idea that we have. We talked about unifying data and storage management. So the intricate storage architecture components, the way data might be maneuvered, whether it's for Kubernetes, for virtual machines, database environments, secondary storage, you name it: we're quickly working to continue driving that level of unification and integration between the portfolio and the Hedvig distributed storage platform, and also delivering on it.
So what you're seeing today, going back to, I think, Ranga's first point, is definitely not the culmination. It's just another step in the direction as we continue to innovate and integrate this product. >>And I think for our partners, what this really does is allow them to sell around customer use cases, because, as Ranga said, if I have a DR use case, I can go after just DR. If I have a backup use case, I can just go after backup, and I don't have to try to sell more than what the customer is looking for. In parallel, we can scale these things in line with the customer use case. So if the customer has a lot of remote offices, they want to scale Hedvig across those, they want to use the power of the cloud, they can scale these things independently, and it really gives us a lot of optionality that we didn't have before, when we had a few monolithic products. >>Excellent. It really reminds me more of how I'd look at products if I were going to buy them from some of the public cloud providers. Living in a hybrid cloud world, of course, is what your customers are doing. Help us understand a little bit: Mercer talked about Metallic and the Azure partnership, but for the rest of the products in the portfolio that we're talking about, does this kind of work seamlessly across my own data center, hosting providers, public cloud? How does this fit into the cloud environment for your customers? >>Yes, it does, and I can start with this one. Our strategy is cloud-first, right? And you see it in every aspect of our product portfolio. In fact, I don't know if you got to see the keynote today, but Ron from Johns Hopkins University was remarking that Commvault has the best cloud-native architectures. And that's primarily because of the innovation that we drive into the multi-cloud reality.
We have very deep partnerships with pretty much all the cloud vendors, and we use that for delivering joint innovation. A few things, when you think of it from a hybrid customer's perspective. The most important need for them is to continue working on-prem while still leveraging the cloud, and we have a lot of optimizations built in for that. The next step of the journey is, of course, making sure that you can recover to the cloud, be it workloads or data, and there's a lot of automation that we provide in our solutions. And finally, if you're already in the cloud, whether you're running SaaS applications or cloud-native, our software protects across all those use cases, either through SaaS with Metallic or through downloadable Backup and Recovery software, so we cover the entire spectrum. We definitely help customers in every stage of their hybrid cloud acceleration journey. >>And if you take a look at Hedvig, the ability to work in a cloud-native fashion is essentially part of the DNA of that storage, right? So whether you're running on-prem, in a cloud-adjacent setup, or inside the cloud, Hedvig can work with any compute environment and any storage environment that you want to feed into this distributed storage we build. And the reason that becomes important,
it's pretty much highlighted with our announcement around the Kubernetes and container support, is that it makes it really easy to start maneuvering data from on-prem to the cloud, from cloud to cloud, region to region, sort of that high availability that, as customers make cloud-first a reality in their organizations, starts to become a critical requirement for ensuring application uptime. And some of the things we've done now with Kubernetes, in making all of our integrations completely Kubernetes-native so that they can support Google, AWS and Azure, and of course any on-premises Kubernetes setup, just showcase the value that we can provide in giving customers that level of data portability. It basically provides a common foundation layer for how any of the DevOps teams will be operating with those stateful container workloads. >>Sorry, go >>ahead, Mercer. >>Since you mentioned the Metallic and Azure partnership announcement, I just want to add on that, and on one thing that Ranga mentioned. We are really excited about the announcement of the partnership with Microsoft and all the different use cases that opens up for our SaaS platform with Azure, with Office 365 and all of the great application stack that sits on it. At the same time, to Ranga's point, we are a multi-cloud company, and whether that is the other hyperscale clouds, GCP, Alibaba, Oracle and IBM, etc., or our great service provider partners, we continue to believe in customer choice, and we'll continue to drive unique innovations across all of those platforms. >>All right, Don, I was wondering if we could dig in a little bit more on some of the Kubernetes pieces you were talking about. If we look at just the maturation of storage in general, how do we handle state back into containers in Kubernetes environments?
Help us see what you're hearing from your customers, and how you're ready to meet their needs, to not only deliver storage but, as you say, really full data protection in that environment. >>Certainly. So there have been a number of enhancements in the Kubernetes environment in general over the last two years. One of the big ones was the creation of what the Kubernetes environment calls a persistent volume. What that allows you to do is present storage to a Kubernetes application, typically through what's called a CSI, the container storage interface, which allows for stateful data to be written to storage and be handled and reattached to applications as you move them about the Kubernetes environment. As you can probably imagine, with the addition of these stateful applications, some of the overall management of stateless and stateful apps becomes very challenging. And that's primarily because many customers have been using some of the more traditional storage solutions to try to map into these new stateful scenarios. As you start to think about DevOps organizations, most of them want to work in the environment of their choice, whether that's Google, AWS, Microsoft, something that might be on-prem, or a mix of different on-prem environments. What you typically find, at least in the Kubernetes world, is that there's seldom ever one single, very large Kubernetes infrastructure cluster that's set to run dev, test and production all at once. You usually have this spread out across a fairly global configuration, and that's where some of these traditional mechanisms from traditional storage vendors really start to fall down, because you can't apply the same level of automation and controls in every single one of those environments
when you don't control the storage, let's say. And that's really where interfacing with Hedvig, and allowing that sort of extensible distributed storage platform, brings about all of this automation, policy control and really storage execution definition for those stateful workloads, so that managing the stateless and the stateful becomes pretty easy and pretty easy to maintain, whether it comes to developing another dev branch or simply doing disaster recovery or HA for production. >>If I may quickly add to that, it's a very interesting response, and the reality is customers are beginning to experiment with Kubernetes. Very often they only have a virtual environment, and now they're also trying to expand into containers. So Hedvig's ability to serve as primary storage for virtualization as well as containers actually gives a degree of flexibility and freedom for customers to try out containers and start their container journey with familiar constructs; you just need to get going with containers. >>All right. Ranga, flexibility is something that I heard when you talked about the portfolio and the pricing, as to how you put these pieces together. You actually talked about, in the presentation this morning, aggressive pricing. If we talk about kind of backup and recovery, help us understand, with Commvault in 2020, how you're looking at your customers and how you put together your products to meet what they need at, as you said, aggressive pricing.
So they can pick and choose the exact solution that need because there are delivered on the same platform that can enable out the solution investment, you know, And that's the reality. We know that many of our customers are going to start with one and keep adding more and more services, because that's what we see as ongoing conversations that gives us the ability to really praise the entry products very aggressively when compared to competition, especially when we go against single product windows. This uses a lot of slammed where we can start with a really aggressively priced product and enable more capabilities as we move forward to give you an idea, we launched disaster recovery today. I would say that compared to the so the established vendors India, we would probably come in at about 25 to 40% of the Priceline because it depends on the environment and what not. But you're going to see that that's the power of bringing to the table. You start small and then depending on what your needs are, you have the flexibility to run on either. More data management capabilities are more workloads, depending on what your needs will be. I think it's been a drag from a partner perspective, less with muscle. If you want a little bit more than that, >>yes, I mean, that goes back to the idea of being ableto simply scale across government use functionality. For example, things like the fact that our disaster recovery offering the Newman doesn't require backup really allows us to have those Taylor conversations around use cases, applications >>a >>zealous platforms. You think about one of the the big demands that we've had coming in from customers and partners, which is help me have a D R scenario or a VR set up in my environment that doesn't require people to go put their hands on boxes and cables, which was one of those things that a year ago we were having. 
This conversation would not necessarily have been as important as it is now, but that ability to target those specific, urgent use cases without having to go across on sort of sell things that aren't necessarily associated with the immediate pain points really makes those just makes us ineffective. Offer. >>Yeah, you bring up some changing priorities. I think almost everybody will agree that the number one priority we're hearing from customers is around security. So whether I'm adopting more cloud, I'm looking at different solutions out there. Security has to be front and center. Could we just kind of go down the line and give us the update as to how security fits and all the pieces we've been discussing? >>I guess I'm talking about change, right, so I'll start. The security for us is built into everything that we do the same view you're probably going to get from each of us because security is burden. It's not a board on, and you would see it across a lot of different images. If you take our backup and recovery and disaster recovery, for instance, a lot of ransomware protection capabilities built into the solution. For instance, we have anomaly detection that is built into the platform. If we see any kind of spurious activity happening all of a sudden, we know that that might be a potential and be reported so that the customer can take a quick look at air Gap isolation, encryption by default. So many features building. And when you come to disaster recovery, encryption on the wire, a lot of security aspects we've been to every part of the portfolio don't. >>Consequently, with Hedvig, it's probably no surprise that when that this platform was developed and as we've continued development, security has always been at the core of what we're doing is stored. So what? 
It's for something as simple as encryption on different volume, ensuring the communication between applications and the storage platform itself, and the way the distributors towards platform indicates those are all incredibly secured. Lock down almost such for our own our own protocols for ensuring that, um, you know, only we're able to talk within our own, our own system. Beyond that, though, I mean it comes down to ensure that data in rest data in transit. It's always it's always secure. It's also encrypted based upon the level of control that using any is there one. And then beyond just the fact of keeping the data secure. You have things like immutable snapshots. You have declared of data sovereignty to ensure that you can put essentially virtual fence barriers for where data can be transported in this highly distributed platform. Ah, and then, from a user perspective, there's always level security for providing all seeking roll on what groups organization and consume storage or leverage. Different resource is the storage platform and then, of course, from a service provider's perspective as well, providing that multi tenanted access s so that users can have access to what they want when they want it. It's all about self service, >>and the idea there is that obviously, we're all familiar with the reports of increased bad actors in the current environment to increased ransomware attacks and so forth. And be a part of that is addressed by what wrong and done said in terms of our core technology. Part of that also, though, is addressed by being able to work across platforms and environments because, you know, as we see the acceleration of state tier one applications or entire data center, evacuations into service provider or cloud environments has happened. You know, this could have taken 5 10 years in a in a normal cycle. But we've seen this happen overnight has cut this. 
Companies have needed to move those I T environments off science into managed environments and our ability to protect the applications, whether they're on premises, whether they're in the cloud or in the most difficult near where they live. In both cases, in both places at once, is something that it's really important to our customers to be able to ensure that in the end, security posture >>great Well, final thing I have for all three of you is you correctly noted that this is not the end, but along the journey that you're going along with your customers. So you know, with all three of you would like to get a little bit. Give us directionally. What should we be looking at? A convo. Take what was announced today and a little bit of look forward towards future. >>Directionally we should be looking at a place where we're delivering even greater simplicity to our customers. And that's gonna be achieved through multiple aspects. 1st 1 it's more technologies coming together. Integrating. We announced three important integration story. We announced the Microsoft partnership a couple of weeks back. You're gonna see us more longer direction. The second piece is technology innovation. We believe in it. That's what Differentiators has a very different company and we'll continue building it along the dimensions off data awareness, data, automation and agility. And the last one continued obsession with data. What more can we do with it? How can we drive more insights for our customers We're going to see is introducing more capabilities along those dimensions? No. >>And I think Rhonda tying directly into what you're highlighting there. I'm gonna go back to what we teased out 10 months ago at calm Bolt. Go there in Colorado in this very on this very program and talk about how, in the unification of ah ah, data and storage management, that vision, we're going to make more and more reality. 
I think the announcements we've made here today, plus some of the things we've done in between, in the lead-up to this point, are just proof of our execution. And I can happily and excitedly tell you, we're just getting warmed up. It's going to be a fun future ahead. >>And I think, Stu, just rounding that out with the partner angle: obviously, we're going to continue to produce great products and solutions, and we're going to make our partners relevant in those conversations with customers. I think we're also going to continue to invest in alternative business models and services, things like migration services, audit services, and other things that build on top of this core technology to provide value for customers and additional opportunities for our partners to build out their offerings around Commvault technologies. >>All right, well, thank you, all three of you, for joining us. It was great to be able to dig in and understand those pieces. I know you've got lots of resources online for people to learn more, so thank you so much for joining us. >>Thank you. >>Thank you. >>All right, and stay with us. We've got one more interview left for theCUBE's coverage of Commvault FutureReady. I'm Stu Miniman, and thanks, as always, for watching theCUBE.
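As an aside, the anomaly detection described in this panel, flagging a sudden spike in backup change rates as a possible ransomware signal, can be sketched in a few lines. This is a hypothetical illustration of the general technique only, not Commvault's implementation; the function name and the z-score threshold are invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(daily_changed_gb, threshold=3.0):
    """Flag days whose backup change rate deviates sharply from history.

    A sudden spike in changed data is a classic ransomware signal:
    mass encryption rewrites files, so the incremental backup balloons.
    Returns indices of suspicious days (needs a few days of history first).
    """
    flagged = []
    for i in range(3, len(daily_changed_gb)):
        history = daily_changed_gb[:i]
        mu, sigma = mean(history), stdev(history)
        # Guard against a perfectly flat history where stdev is zero.
        if sigma < 1e-9:
            sigma = 1e-9
        z = (daily_changed_gb[i] - mu) / sigma
        if z > threshold:
            flagged.append(i)
    return flagged
```

With a stable history such as 10 to 12 GB of daily change, a day that suddenly writes 450 GB is flagged, while normal day-to-day variation is not.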

Published Date : Jul 21 2020



Patrick Smith, Pure Storage & Eric Greffier, Cisco | Cisco Live EU Barcelona 2020


 

>> Announcer: Live from Barcelona, Spain, it's theCUBE! Covering Cisco Live 2020. Brought to you by Cisco and its ecosystem partners. >> Welcome back, this is theCUBE's live coverage of Cisco Live 2020, here in Barcelona. Our third year of the show, over 17,000 in attendance between the Cisco people, their large partner ecosystem, and the customers. I'm Stu Miniman, my cohost for this segment is Dave Vellante, and John Furrier's scouring the show for all of the news at the event. Joining us, we have two first time guests on the program: first, sitting to my left is Patrick Smith, who is the field CTO for EMEA with Pure Storage, and sitting to his left is Eric Greffier, who is the managing director of EMEAR specialists with Cisco, so you have a slightly larger region than Patrick does. Gentlemen, thanks so much for joining us. >> Patrick: Great to be here. >> All right, so, we know this show, we were talking about that broad ecosystem, and of course Cisco in the data center group has very strong storage partnerships, highlighted by their converged infrastructure stacks. I wrote in my research many years ago that Cisco's brilliant move when they entered the server market was making sure they built partnerships across that fragmented storage ecosystem. And of course, Pure's ascendancy in the flash era made the stack, helping to paint those data centers orange with your Cisco partnership. So Patrick, give us the update here in 2020: what's interesting and important to know about the Pure Storage and Cisco customer base? >> You know, we continue to see significant adoption of FlashStack, our converged infrastructure with Cisco, driving just great interest and great growth, both for Pure and for Cisco with the UCS platform, and the value that the customers see in FlashStack, bringing together storage, networking and compute with overall automation of the stack, really gives customers fantastic time to value.
And that's what they're looking for in this day and age. >> All right, and Eric, what differentiates the partnership with Pure, versus, as you said, the many storage companies you work with out there? >> Well, we had a baby together, it was called FlashStack, a couple of years ago now, and as you said, I think the key element for us is really to have those CVDs, those Cisco Validated Designs, together, and FlashStack was a great addition to our existing partnership at that time, talking about a couple of years ago. And of course, with the flash technology of Pure, we've seen the demand growing and growing, and it has been an amazing trajectory together. >> But talk a little bit more about the CVDs, the different use cases that you're seeing. You don't have to go through all 20, but maybe pick a couple of your favorite children. >> Well, just to make sure that people understand what CVD means, it's Cisco Validated Design, and this is an outcome in the form of a document, available for customers and partners, which is the result of the partnership from R&D to R&D, telling customers and partners what they need to order and how to fit all of this together for a specific business outcome. And the reason why we have multiple CVDs is that we have one CVD per use case. So the more use cases we have together, the more precise the CVDs, and you just have to follow the CVD design principles. Of course, the latest ones, and maybe Patrick can say a word on this, have been around analytics and AI, because this is a big demand right now, so maybe Patrick, you want to say a word on this. >> Yeah, you guys were first with the AI, bringing AI and storage together with your partnership with Nvidia, so maybe double down on that. >> FlashBlade was our move into building a storage platform for AI and modern analytics, and we've seen tremendous success with that in lots of different verticals.
And so with Cisco we launched FlashStack for AI, which brings together FlashBlade, networking, and Cisco's fantastic compute platform, with the capability for considerable scale of Nvidia GPUs. So it's an in-a-box capability to really deliver fast time-to-market solutions for the growing world of analytics and modern AI. People want quick insight into the vast amounts of data we have, and so FlashStack for AI is really important for us, being able to deliver as part of the Cisco ecosystem and provide customers with a platform for success. >> What's happening with modernization, generally, but specifically in Europe? Obviously Cisco has a long history in Europe; Pure, you've got a good presence here, but obviously much newer, and a far larger proportion is in North America, so it's a real opportunity for you guys. What are you seeing in terms of modernization of infrastructure and apps in the European community? >> Modernization I think is particularly important, and it's more and more seen under the guise of digital transformation, because investing in infrastructure just doesn't get the credit that sometimes it deserves. But the big push there is really all around simpler infrastructure, easier management, and the push for automation. Organizations don't want to have large infrastructure support teams who are either installing or managing their environments in a heavy-touch way, and so the push towards automation, not just at the infrastructure layer, but all the way up the stack, is really key.
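The automation theme that runs through this conversation is declarative: tools like Ansible and Intersight converge infrastructure on a desired state rather than scripting imperative steps, and do nothing when the state is already correct. A minimal sketch of that reconcile contract, with an invented `FakeArray` client standing in for a real storage API (real FlashStack automation would go through Intersight or the vendors' own modules):

```python
class FakeArray:
    """Stand-in for a storage array API client (illustration only)."""
    def __init__(self):
        self.volumes = {}

    def get_volume(self, name):
        return self.volumes.get(name)

    def create_volume(self, name, size_gb):
        # Creates the volume, or overwrites it at the new size.
        self.volumes[name] = {"name": name, "size_gb": size_gb}

def ensure_volume(array, name, size_gb):
    """Converge on the desired state; report whether anything changed.

    This is the contract declarative modules follow: describe the end
    state, and let the module decide whether work is needed, so running
    the same playbook twice is safe (idempotency).
    """
    existing = array.get_volume(name)
    if existing is not None and existing["size_gb"] == size_gb:
        return {"changed": False}
    array.create_volume(name, size_gb)
    return {"changed": True}
```

Running `ensure_volume` twice with the same arguments reports `changed: True` once, then `changed: False`, which is exactly how a well-behaved automation module avoids disturbing infrastructure that already matches intent.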
And you know, we were talking earlier, behind us we have the DevNet sessions here, all about how customers of Cisco, and by extension Pure, can really optimize the management of their environment, use technology like Intersight, like Ansible and others, to really minimize the overhead of managing technology, deliver services faster to customers, and be more agile. In this always-on world that we live in, there's no time to really add a human to the cycle of managing infrastructure. >> I think we've been very proud over the years, because with this notion of converged infrastructure, the promise was to simplify and modernize the data centers. Before, it was like, "Everything needs to get connected to everything," and then came this notion of a pod, everything converged: "We've done the job for you, mister customer, just think about adding another pod." This has been the promise for the last 10 years, and we've been very proud, almost, to have created this market, but it wouldn't have been possible without the partnership with the storage players, and with Pure, we've gone one step further in terms of simplifying things for customers. >> I love the extension you're talking about, because absolutely, converged infrastructure was supposed to deliver on that simplicity, and it was, let's think of the entire rack as a unit of how we manage it. But with today's applications, with the speed of change happening in the environment, we've gone beyond human speed, and so therefore if we don't have the automation that you were talking about, we can't keep up with what the business needs to be able to do there. >> Yeah, that's what it's all about, it's the rapid rate of change.
Whether it's business services, whether it's supporting developers in the developer environment, more and more our customers are becoming software development organizations. Their developers are a key resource, and making them as efficient as possible is really important, so being able to quickly spin up development environments, new environments for developers, using snapshot technology, giving them the latest sets of data to test their applications on, is really central to enabling and empowering the developer. >> You know, you talk about Cisco's play and kind of creation of the converged infrastructure market, and I think that's fair, by the way. Others may claim it, but I think the mantle goes to you. But there were two friction points, or headwinds, that we pointed out early in the day. The first was organizational: the server team, the storage team, the network team didn't speak together. Then a practitioner told us one day, "Look, you want to solve that problem, put it in and watch what happens." 'Cause if you try to figure out the organization you'll never get there, and that sort of took care of itself. The other was the channel. The channel likes things separate, they can add value, they have this sort of box-selling mentality, so I wonder if you could update us on what the mindset is in the channel, and how that's evolved. >> Yeah, it's a great question. I think the channel actually really likes the simplicity of a converged infrastructure to sell; it's a very simple message, and to your point about organization, it really empowers the channel: they have the full stack, all in one sellable item, and so they don't have to fight for the different components. It's one consistent unit that they sell as a whole, and so I think it simplifies things for the channel, and actually, we find, and it's shown by our growth with FlashStack, that customers are actively seeking out the channel partners who are selling FlashStack.
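Patrick's point about spinning up developer environments from snapshots can be modeled in miniature. Real arrays implement this with copy-on-write metadata, so a clone is near-instant regardless of data size; this toy `VolumeStore` (an invented name, not any vendor's API) only illustrates the workflow:

```python
import copy

class VolumeStore:
    """Toy model of snapshot-and-clone for developer environments."""
    def __init__(self):
        self._volumes = {}
        self._snapshots = {}

    def write(self, volume, key, value):
        self._volumes.setdefault(volume, {})[key] = value

    def read(self, volume, key):
        return self._volumes[volume][key]

    def snapshot(self, volume, snap_name):
        # Freeze the current state; later writes don't affect the snapshot.
        self._snapshots[snap_name] = copy.deepcopy(self._volumes.get(volume, {}))

    def clone(self, snap_name, new_volume):
        # Spin up a dev volume from the chosen snapshot of production data.
        self._volumes[new_volume] = copy.deepcopy(self._snapshots[snap_name])
```

A developer's clone can be written to freely while production and the snapshot stay untouched, which is what makes snapshot-based dev environments safe to hand out.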
>>Yeah, and do you think the channel realizes, "Wow, we really do have to go up the stack, add more value, do things like partner up"? >>Well, most of the partners were heavily specialized in storage or compute or network, so for most of them, supporting the converged infrastructure was a way to put a foot into another market, which was an expansion for them; that's part number one. Part number two, and maybe the thing that had been missing: since the beginning we had APIs around all those platforms, but I don't believe that in the early days, and I'm talking about five years ago, they could really build something upon the converged infrastructure. Now, if you go through the DevNet area here at Cisco Live, you will see that I think this is the time for them to understand, and really build new services on top of it, so I believe the value for the channel is pretty obvious now, more than ever.
What's act two for you guys? What do you envision? Are you disruptors, are you more incrementalists? I'd love to hear your thoughts on that. >> I'll start, Patrick. Probably for us, phase two is what you heard yesterday morning. I think Liz Anthony did a great speech regarding Cisco Intersight Workload Optimizer; sorry for the name, it is a bit long, but what it means is that now we truly connect the infrastructure to the application performance. The fact that we can place converged infrastructure in the context of what truly matters for customers, which is the application: this is the first time ever you're going to see such an amount of R&D put into bringing the two worlds together. So this is just the beginning, but I think this was probably, for me, yesterday, one of the most important announcements ever. And by the way, Pure is part of this announcement, so if you as a customer buy Cisco Intersight Workload Optimizer, you'll get everything you need to know about Pure, and if you have to move things around the storage area, the tool will be doing it for you. So the two of us are really together in this announcement. Patrick, do you want to add to that? >> No, I mean, as Eric mentioned, Intersight's important for Cisco, it's important for us, and we're very proud to be early integrators, as a third party, into Intersight to allow that simple management. But you know, as you talk about the future, we were viewed as disruptors when we first came to market with FlashArray, and we still consider ourselves to be disruptors and innovators, and the amount of our revenue that we invest in innovation, in what is a really focused product portfolio, I think is showing benefits. You've seen the announcements over the last six months or so with FlashArray//C, bringing all the benefits of flash to tier two applications, and just the interest that that has generated is huge.
In the world of networking, with NVMe we have a fabric in RoCEv2, just increasing the performance for business applications, which will have fantastic implications for things like SAP and time- and performance-critical databases, and then there's what we announced with DirectMemory, adding SCM as a read cache onto FlashArray as well. We're really giving customers investment protection for what they bought from us already, because, as you well know, Evergreen gives customers an asset that continues to appreciate in value, which is completely the opposite of the usual depreciation. >> And you're both sort of embracing that service consumption model, I mean, Cisco's becoming a very large proportion of your business, you guys have announced some actual straight cloud plays, you've built an array inside of AWS, which is pretty innovative, so. >> Yes, and as well as the cloud play with Cloud Block Store in AWS, there's Pure as-a-Service, which takes that cloud-like consumption model and allows a customer to run it in their own data center without owning the assets, and that's really interesting, because customers have got used to the cloud-like consumption model, and paying as OpEx rather than CapEx, and so bringing that into their own facility, and only paying for the data they have written, really does change the game in terms of how they consume and think about their storage environments. >> Patrick, we'd just love to get your viewpoint; you've been talking to a lot of customers this week, you said you've been checking out the DevNet zone. For people that didn't make it to the show here, what have they been missing, what would their peers be telling them in the hallway conversations?
>>There's a huge amount, as we've been talking about, there's a huge amount on automation, and actually, as we go into customers, the number of people we're now talking to who are developers, not developers developing business applications, but developers developing code for managing infrastructure, is key, and you see it all around the DevNet zone. And then there's the focus on containers. I've been talking about it for a long time, and containers are so important for enterprises going forward. We have a great play in that space, and I think as we roll forward, over the next three to five years, containers are just going to be the important technology that will be prevalent across enterprises large and small. >> Dave: Yeah, we agree. >> Eric and Patrick, thank you so much for giving us the update; congratulations on all the progress, and we definitely look forward to keeping an eye on it. >> Thanks very much. >> All right, Dave Vellante and I will be back with much more here from Cisco Live 2020 in Barcelona, thanks for watching theCUBE. (techno music)

Published Date : Jan 29 2020



Simplifying Blockchain for Developers | Esprezzo


 

from the silicon angle media office in Boston Massachusetts it's the queue now here's your host David on tape so cube conversations simplifying blockchain for developers remi karpadito is here is the CEO of espresso remy thanks for coming in yeah thanks for having yeah so you guys are in the Seaport we want to hear all the action that's going on there but let's start with espresso CEO founder or co-founder um not a co-founder founder okay good just to clarify with respect to your co-founders voice why did you guys start espresso yeah no it starts back on in a little bit little while ago we originally wanted to and a replace our first company was a company called campus towel and we want to replace student identity with NFC chips and smart phones and it was a really cool concept back in 2010 but at the time there's only one phone that had the technology capable of pulling the south and we built a prototype with that smart phone as a Samsung phone at the time and we brought that around to a dozen plus colleges showing hey you could replace the student ID with the phone you can just tap your phone to it for attendance for events etc and they loved it but everyone had the same question you know when is the iPhone can have the technology and we were three years early the iPhone didn't come up with NFC chips until 2013 and we ended up hitting into a mentoring platform and scaled that company October 70 colleges across the country but ironically enough we came back to the same issue a lot of CIOs and CTOs wants to interface with their single sign-on servers which required us to support this legacy technology you know so AJ and I spun back internally AJ's our co-founder and CTO to identify how can we replace identity again but instead of using hardware and smartphones let's use the blockchain and AJ was an early a Bitcoin adopter back in 2010 mining Bitcoin really I'm passionate about the technology and I started learning a little bit more about it and trying to find a way 
to incorporate blockchain technology into our student identity solution as a secondary offering for Campus Tau but we quickly realized was that our front-end engineering team who is a little bit underwater in terms of the technical skills that needed to help and participate in the development for the boccie an identity solution so we ended up building up to middleware components to help them with the development and that's where we saw kind of that's where the lightbulb went off and the bigger opportunity came about where a lot of the infrastructure and tooling needed in order to build a production level blockchain application isn't quite there yet ice we ended up hitting and building a new company called espresso to make botching development more accessible so let's talk about that that the challenge that your developers face so you were at the time writing in for aetherium and in solidity right which is explain to our audience why that's so challenging what is solidity yeah and and why is it so complex yes illinit e is a JavaScript based framework for writing smart contracts on in the etherion platform it's not a fully baked or fully developed tools that yet in terms of the language there's some nuances but on top of that you also need to understand how to support things like the infrastructure so the cryptography the network protocols so if you want to sustain your own blockchain there's a lower-level skill set needed so the average JavaScript engineering could be a little bit kind of overwhelmed by what's needed to actually participate in a full-blown botching development yes and they're probably close to 10 million JavaScript engineers worldwide so it sounds like your strategy is to open up blockchain development to that massive you know resource yeah and in JavaScript being a definite core focus out of the gates and will be developing a plethora of SDKs including JavaScript and Python and Ruby etc in the thought process is you know activating these engineers 
that have coming new code academies or Enterprise engineers that really get a C++ or another language and allowing them to code in the languages they already know and allow them to participate the blockchain development itself okay and so how many developers are on your team so we've it's a small ad product teams three people on a parodic team now but we're actually the process is killing that up yeah so those guys actually had to go on the job training so they kind of taught themselves and then that's where you guys got the idea said okay yeah exactly and we realized that you know if we could build out this infrastructure this tooling layer that just allows you compile the language as you know into the software or the blockchain side it can make it a much more accessible and then also the other thing too that's interesting it's not just kind of writing the languages they already accustomed to but it's also the way you architect these blockchain solutions and one thing we've realized is that a lot of people think that you know every piece of data needs to live on the blockchain where that's really not something I've been teachers for you to do so because it's really expensive to put all the data on the blockchain and it's relatively slow right now with ethereum of 30 transactions per second there's companies like V chain that are looking to remedy some of those solutions with faster write data write times but the thought process is you can also create this data store and with our middleware it's not just an SDK but it's a side chain or a really performant in-memory based data store they'll allow you to store off chain data it's still in a secure fashion through consensus etc that can allow you to write data rich or today's level applications on the blockchain which is really kind of the next step I see coming in the Box chain space so I'm gonna follow up on when coaching there I mean historically distributed database which is what blockchain is it's been you know 
hard to scale it's like I say low transaction volumes they had to pick the right use cases smart contracts is an obvious one yeah do you feel as though blockchain eventually you mentioned V chain it sounds like they're trying to solve that problem will eventually get there to where it can can compete with the more centralized model head on and some of you know the more mainstream apps yeah and that's and that's kind of where we are because our thought process if we were to move campus topic the kind of private LinkedIn for colleges per se on to the blockchain back when we started it wouldn't be possible so how do you store this non pertinent data this transactional or not even transactional this attribute data within a boxing application and that's really where that second layer solution comes into play and you see things like lightning Network for Bitcoin etc and plasma for aetherium but creating this environment where a developer comes on they create an account they name their application they pick their software language and then they pick their blockchain there's pre-built smart contract we offer them but on top of that they already have this data store that they can leverage these are things that people already accustomed to in the web 2.0 world these are the caching layers that everyone uses things like Redis etcetera that we're bringing into the blockchain space that well I that we believe will allow this kind of large-scale consumer type application well when you think about blockchain you think okay well he thinks it's secure right but at the same time if you're writing in solidity and you're not familiar with it the code could be exposed to inherent security flaws is that so do you see that as one of the problems that you're solving sort of by default yeah I think one thing here is that I kind of as you write a smart contract you need to audit you test it so on and so forth and so we're helping kind of get that core scaffolding put up for the developer so 
they don't need to start from scratch. They don't need to pull a vanilla smart contract off of an open-source library; they can leverage ones that are battle-tested through our internal infrastructure. The last part of our offering is a marketplace of pre-developed components that developers can leverage to rapidly prototype or build their applications, whether consumer or enterprise. >>And you were a developer? What's your background? >>Yeah, so I studied entrepreneurship and information systems. I was a database analyst at Fidelity; that was my last job in the corporate world. So I do have some experience developing, nowhere near that of my co-founder AJ or some of our others, but I understand the core concepts pretty well. >>Well, speaking of blockchain, you see a lot of mainstream companies. Obviously the banks are all looking at it. We just heard VMware making some noise the other day, and certainly IBM makes a lot of noise about smart contracts. So you're seeing some of these mainstream enterprise tech companies commit to it. What do you see there in terms of adoption in the mainstream? >>Yeah, I think the enterprise space is going to fully embrace this technology first. At the consumer level we're still a little ways away, just because this infrastructure and tooling is needed before developers get there. But from the enterprise space, we see obvious things like supply chain being a phenomenal use case for blockchain technology. Walmart and IBM are already implementing really cool solutions. One of my advisors, Rob Dolci, is the president of aizoOn, and they've successfully implemented several blockchain projects, from car-parts manufacturers to track-and-trace for wine all the way from the grape seeds. So there are a lot of different use cases on the supply chain side. Identity is
really exciting. Estonia is already doing some really cool work with digital identities. That's going to have a big impact on voting systems, et cetera, but people are also thinking through newer concepts like video streaming and decentralization of network maps. So there are many different use cases, and for us, we're not trying to necessarily solve, like, a supply chain problem or anything. We're trying to give a set of tools that anyone can use for their verticals, so we're excited to see what Esprezzo gets used for over the next several months. >>You mentioned VeChain before, so explain what VeChain is and what you're doing with those guys. >>Yes, VeChain is another next-generation blockchain. VeChainThor is their new platform, and actually their mainnet launch is tomorrow, so they're really excited. They're introducing heightened security, faster block times, more transactions per second. They have a really interesting governance model that I think is a good balance between pure decentralization and the centralized world, which I think is the intermediate step a lot of these enterprises are going to need to get into the blockchain space. We're working with them early on on their platform, so our token sale will be run through VeChain, which is great. In addition, we'll be working with them through strategic partnerships, and the goal is to have Esprezzo be the entry point for developers coming into VeChain, so we'll help them navigate the waters, have them leverage the pre-built smart contracts, and get more developers into the ecosystem. >>Okay, let's talk about your token sale. You're doing a utility token, and that means you've actually got utility in the token. So how is that utility token being utilized within your community? >>Yeah, so the token is used to meter and mitigate abuse on the platform; at every single transaction it'll validate the transaction. In addition, it will be
an abstraction layer, since we do speak to multiple blockchains. That EZPZ token will have to abstract up to Ethereum, to Thor, which is the VeChain token, in the future to Dragonchain, et cetera. So that's a really interesting use case. One of the interesting things we're trying to solve right now: if you're a developer trying to come in and use cryptocurrency for development, you need to go to something like a Coinbase, you have to exchange fiat to Ethereum, you have to push that out to a third-party exchange, you have to do a trade, and then you have to send that to the digital wallet address where you get EZPZ. >>Whoa, that's a ton of friction. And it's more friction if you're not a crypto person; you're going to be asking, what is it you're asking me to do? >>Yeah, so we're talking to some pretty big potential partners that would be the intermediary, or money service, to allow a seamless transition: an engineer just comes straight onto Esprezzo, puts down a credit card or bank account, gets verified, goes through the standard KYC/AML process, and then is able to get EZPZ in real time. That's something that, at a macro level, I think is one of the biggest barriers to entry in the blockchain space today. >>So what do you call your token? Easy-peasy? Okay, so you're making that simple, transparent, done. So you're doing a utility token, you're doing a raise. Where are you at with that raise? Give us the details. >>Yes, so we just closed our friends-and-family round. We're in the private sale right now, working closely with the VeChain Foundation and helping kick that off as well. This is going to be much more strategic capital in this round. After that, since we are partnered with VeChain, their community gets a little bit of exclusivity in the next piece of the round, so their masternode holders will get a bigger discount in the next round. And then the last round will be the public round for
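The multi-chain abstraction the guest describes is essentially an adapter pattern: one token interface, with a per-chain adapter behind it. The sketch below is an assumption about the shape of such a layer; the adapter and router names are illustrative, not Esprezzo's real API, and the "transactions" are just strings standing in for chain-specific calls.

```python
# Hypothetical sketch of a token abstraction layer over multiple chains.
# EthereumAdapter / ThorAdapter / TokenRouter are illustrative names only.
from abc import ABC, abstractmethod

class ChainAdapter(ABC):
    """One interface every supported chain must implement."""
    @abstractmethod
    def send(self, to: str, amount: float) -> str: ...

class EthereumAdapter(ChainAdapter):
    def send(self, to: str, amount: float) -> str:
        return f"eth-tx:{to}:{amount}"   # stand-in for an Ethereum transaction

class ThorAdapter(ChainAdapter):
    def send(self, to: str, amount: float) -> str:
        return f"thor-tx:{to}:{amount}"  # stand-in for a VeChainThor transaction

class TokenRouter:
    """Routes one token operation to whichever chain the application picked."""
    def __init__(self):
        self.adapters = {"ethereum": EthereumAdapter(), "thor": ThorAdapter()}

    def transfer(self, chain: str, to: str, amount: float) -> str:
        return self.adapters[chain].send(to, amount)

router = TokenRouter()
print(router.transfer("thor", "0xabc", 5.0))  # prints "thor-tx:0xabc:5.0"
```

Adding support for another chain (Dragonchain, say) would then mean writing one more adapter rather than touching application code, which is the point of abstracting the token above any single chain.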
the general community, and that's where we anticipate a lot of developers. We already have development shops coming on and participating in the first round, which is great, because the thought process is we want to get as many developers on this platform as possible throughout the summer. >>I think that's one of the most unique things about token sales: it's not just raising capital, it's actually getting people that want to use your product to buy in now, and that's amazing. So okay, you're doing the private sale first, you open that up to the types of folks you just mentioned, and they get some kind of discount on the token because they're in early and backing you guys early. And then you've got a Telegram channel, I know it went up recently, everything is exploding, it looks like a pretty hot offering. Then what happens next? You open it up to a wider audience? >>We start getting the core community members from VeChain, and then after that the public sale will be really targeted toward the end users. These aren't the people that need to put in a large, substantial amount of capital; at that point you could put in a couple hundred dollars and actually participate in the token sale, and you'd be getting in on the ground floor. >>And the SEC just made a ruling recently, a week ago or so, that Bitcoin and Ethereum were not securities, so that's a good thing. Nonetheless, as a CEO and entrepreneur you must have been concerned about a utility token and making sure everything's clean, that there actually is utility. You can't just use the utility token to do a raise and then go build the product. You have a working product, right? >>Yeah, there's a lot of functionality already set up, and we're going to continue to iterate before we even get close to the public sale. We anticipate having full functionality of what we want to get
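The tiered rounds just described (friends and family, private/strategic, community, then public) typically differ only in the discount applied to a base token price. The numbers below are purely hypothetical, not Esprezzo's actual terms; the sketch just shows the arithmetic of why an early backer's dollar buys more tokens.

```python
# Illustrative only: tier names, discount rates, and base price are assumed,
# not taken from the interview or any real token sale.
TIERS = {"friends_family": 0.30, "private": 0.20, "community": 0.10, "public": 0.0}
BASE_PRICE = 0.10  # hypothetical USD per token in the public round

def tokens_for(tier: str, usd: float) -> float:
    """Tokens received for a USD contribution at a tier's discounted price."""
    price = BASE_PRICE * (1 - TIERS[tier])
    return usd / price

# An early backer's dollar buys more tokens than a public-round dollar,
# while a couple hundred dollars still participates in the public round.
assert tokens_for("friends_family", 100) > tokens_for("public", 100)
print(round(tokens_for("public", 200), 1))
```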
out there to the development world by the end of the sale. I think one of the biggest things in this space right now, in terms of the law and compliance side, is a lot of self-regulation, since in the U.S. in particular it's such a gray area. You need to stay up to date with every single hearing and announcement, but also really make sure you're taking best practices with KYC/AML, making sure the people investing in the company, or participating in the allocation, are good people. That's something we've spent a lot of time on with our legal team; I've gotten pretty intimate with our lawyers and really come to understand the nuances of this space over time. >>What about domicile? What can you advise people, based on your experience, in terms of domicile? >>Yeah, I'm not a lawyer, but based on our experience there are some great places over in Europe: Switzerland, Malta, Gibraltar. We're down in the Caymans, and there's also Singapore. These different jurisdictions are writing new law to support the effort, and I think that's going to continue to happen. I hope it happens in the U.S. too, so we remove some of this nuance and these gray areas and people can feel more comfortable operating. I think that's going to happen, hopefully soon, in the next six months or so. We'll see, but as long as more guidance continues to come out, I think people can operate in the U.S. I know a lot of people are moving offshore like we did, so it's a tough area right now. >>Well, it gives you greater flexibility, and like you said, it's less opaque, so you can have more confidence that what you're going to do is on the up-and-up. Because as an entrepreneur, you don't want to worry about compliance; you just want to do your job, write great code, execute, and build a company. And I feel, I don't know if you agree, that the U.S.
is a little bit behind, really slow to support entrepreneurs like yourselves, like us. We'd like more transparency and clarity, and you just can't seem to get a decision. You're sort of in limbo, and you've got to move your business ahead, so you make a decision: you go to the Caymans, you go to Switzerland, you go to Malta, and you move on, right? >>And I think it's interesting, too. A lot of what the SEC did in the beginning: there were a ton of bad actors out there, just as there were a bunch of good actors. So again, if you self-regulate and you really understand what you need to do to be compliant, you should be fine. But the flexibility you get right now from the more defined law in some of these other jurisdictions makes a lot of sense. >>Yeah, and I don't mean to be unfair to the SEC. They're doing a job, and they need to protect the little guy and protect the innocent, no question. I'd just like to see them be more proactive and provide more clarity sooner rather than later. So, okay, last question: the Seaport scene in Boston. We always compare Boston and Silicon Valley. You can't compare the two; Silicon Valley is a vortex in and of itself. But the Boston scene is coming back: there's blockchain, there's IoT, the Seaport is cranking. You guys are in the Seaport, you live down there. What are you seeing? Give us the vibe. >>Yeah, Boston Blockchain Week just passed about a month ago, maybe less, and there were great turnouts. I spoke at a few events, a few hundred people at each one, which is great. It's interesting: you get a good mix of enterprise people looking to learn and educate themselves in the space, and you see the venture capital side moving in and participating in a lot of these larger-scale events. The blockchain scene in Boston is definitely growing rapidly. I've spent some time in New York, and that's another great spot too, and even places like Atlanta. And I was
down in Denver; I did a big presentation down in Denver, which was awesome. And the coolest thing about blockchain is that it really is global. I've spent a lot of time in Asia and in Europe speaking over there, and the pure, tangible energy in the room is amazing. It's one of the most exciting things about the industry. Many people in the space know we're on the cutting edge here; this is a new frontier that we're building along the way, and being part of that and helping define it is pretty exciting stuff. >>That's cool. You know, I said last question, but I lied. I forgot to ask you a little bit more about your team and your advisors. Maybe you could just give us a brief rundown. >>Yeah, okay. There's my co-founder and CTO; we've been working together since, I believe, my sophomore year of college, so it's been a while. He's the original crypto and blockchain guy, pushed us into the space, and is leading the product development. On top of that we have Craig Gainsborough, our CFO, who actually spent a lot of time at PwC; he was the North America tax and advisory CFO over there. Jalen Lou is the director of product marketing. Kevin Coos is the head of product; he was nominated for a Webby. And then we have our ops team: Kyle, a former CampusTap guy and a complete business dev guy, working with us on some of the other pieces. On the advisory side we have a really good team: Sunny Lu, the CEO and founder of VeChain, just came on; Eileen Quentin, the president of the Dragonchain Foundation, the blockchain company spun out of Disney; and David Fragale, the co-founder and head of product at Atonomi, an IoT protocol, where there's really cool stuff happening and a new program coming about. Rob Dolci is the president of aizoOn North America, the supply chain company, and they've already successfully deployed a handful of use cases. And Dr. Mihaela Ulieru, who is really interesting in the sense that she was working on decentralized systems before they were called blockchain. She worked with the professor at Berkeley who defined decentralization in technology, she speaks at the World Economic Forum frequently, and she's really just a global presenter. So we feel like we have a really strong team right now, and we're actually getting to the point of scaling, so it's going to be exciting to start bringing in some new people and picking up the momentum. >>Super exciting. Well, listen, congratulations on getting to where you are, and best of luck going forward, best of luck with the raise and with solving the problem you're solving. It's an important one. Thanks for coming on theCUBE. >>Of course, thank you so much. >>You're welcome. All right, thanks for watching, everybody. We'll see you next time. This is Dave Vellante.

Published Date : Jun 29 2018
