
Search Results for Ganesh:

Ganesh Pai, Uptycs | AWS re:Invent 2022


 

(upbeat music) >> Hello, fellow cloud nerds and welcome back to AWS re:Invent here in beautiful Sin City. We are theCUBE. My name is Savannah Peterson, joined by my dear colleague and co-host Paul Gillon. Paul, last segment. >> Good thing too. >> Of our first re:Invent. >> A good thing too 'cause I think you're going to lose your voice after this one. >> We are right on the line. (laughter) You can literally hear it struggling to come out right now. But that doesn't mean that the conversation we're going to have is not just as important as our first or our middle interview. Very excited to have Ganesh from Uptycs with us today. Ganesh, welcome to the show. >> Savannah and Paul, thank you for having me here. >> It's a pleasure. I can tell from your smile and your energy, you're like us, you've been having a great time. How has the show been for you so far? >> Tremendous. Two reasons. One, we've had great parties since Monday night. >> Yes. Love that. >> The turnout has been fantastic. >> You know, honestly you're the first guest to bring up the party side of this. And obviously there's a self-indulgence component of that, but beyond the hedonism, it is a big part of the networking in the community. And I love that you had a whiskey tasting. Paul and I will definitely be at the next one that you have. In case folks aren't familiar, give us the Uptycs pitch. >> So we are a Boston-based venture. What we provide is cloud infrastructure security. I know if you raise your hand. >> Hot topic. >> Yeah, hot topic obviously given where we are. But we have a unique way of providing visibility into workloads from inside the workload, as well as by connecting to the AWS control plane. We cover the entire Gartner acronym soup; they call it CNAPP, which is cloud native application protection platform. That's what we do. >> Now you provide cloud infrastructure security. I thought the cloud providers did that. >> Cloud providers, they provide elements of it because they can only provide visibility from outside in. And if you were to take AWS as an example, they give you only an account-level view. If you want to do things at an organization where you might have a thousand accounts, you're left to fend for yourself. If you want to span other cloud service providers at the same time, then you're left to fend for yourself. That's why technologies like us exist, who can not only span across accounts but go across clouds and get visibility into your workload. >> Now we know that the leading cause of data loss in the cloud, or breaches if you will, is misconfiguration. Is that something that you address as well? >> Yes. If you were to look at the majority of the breaches, they're due to two reasons: one, due to arguably what you can call vulnerabilities, misconfigurations, and compliance-related issues; or the second part, things of a behavioral nature, which are due to threats, which then result in some kind of data loss. But misconfiguration is a top issue, and it's called cloud security posture management, where once you scope and assess the extent of misconfigurations, maybe there's a chance that you go quickly remediate it. >> So how do you address that? >> Oh, yeah. >> How does that work? >> So if you were to look at AWS and think of it as the orchestration plane for your workloads and services, they provide an API. And this API allows you to get visibility into what your configuration looks like.
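To make the control-plane idea concrete, here is a minimal sketch of one such API query, assuming Python with boto3, configured AWS credentials, and an illustrative region; it is not Uptycs code, just the flavor of visibility Ganesh is describing.

```python
# Hypothetical sketch: one narrow slice of control-plane visibility --
# asking the AWS API which security groups are open to the whole
# internet. The region is illustrative; credentials come from the
# environment or an AWS profile.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def world_open_groups():
    """Yield (group id, from port, to port) for rules reachable from 0.0.0.0/0."""
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                    yield sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")

for group_id, low, high in world_open_groups():
    print(f"{group_id} exposes ports {low}-{high} to the internet")
```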
And it also allows you to figure out, on an ongoing basis, if there are any changes to your configurations. And usually you start with a baseline of configuration, and the passage of time is where misconfigurations come into play. By understanding the full stream of how it's been configured and how changes are occurring, you get the chance to go remediate any kind of misconfiguration, and hence vulnerabilities, from that. >> That was a great question, Paul. And I'm sure, I mean people want to do that. $23 billion was invested in cybersecurity in 2021 alone, casual dollar amount. I can imagine cybersecurity is a top priority for all of your customers, probably most of the people on the show floor. How quickly does that mean your team has to scale and adapt, given how smart attacks and various things are getting on the dark side of things? >> Great question. The bigger problem than the scale we're solving for is the shortage of people. There's a shortage of people who actually know. >> I was curious about that. Yeah. >> So a shortage of people who understand how to configure it, let alone people who can secure it with technology like ours, right? So if you go in that pecking order, it's people, and organizations like us exist such that at scale you can identify these changes and help enable those people to quickly scope and assess what's wrong, and potentially help them remediate before it really goes out of control. (metal clinking) >> This is the so-called XDR part of your business, right? >> Yes. So there are two parts. One is around the notion of auditing and compliance and getting visibility, like the first question that you asked around misconfiguration. That's one part of what we do, from the control plane of the cloud. The second part is more behavioral in nature. It results from having visibility into the actual workload. For example, if there's been a misconfiguration and it's been exploited, you then want to reduce the dwell time, to figure out what really is happening in case there's potentially nefarious and malicious activity going on. That's the part where XDR (metal clinking) or CWPP comes into play, where it's basically detection and response for cloud workload protection. >> And how is, it's a fairly new concept, XDR. How is the market taking to it? How popular is this with the customer? >> XDR is extremely popular. So much so that, thanks to Gartner and other top analysts, it's become like a catchall for a whole bunch of things. So its popularity is incredibly on the rise. However, there are elements of XDR, the last two parts, detection and response, which are very crucial. X could stand for whatever; it's the extended version. As applied to cloud, there's a bunch of things you can do; as applied to laptops, there's a bunch of things it can do. Where we fit into the equation, especially from an AWS or a cloud-centric perspective: if the crown jewels of software are developed on a laptop, and the journey of the software is from the laptop to the cloud, that's the arc that we protect. That's where we provide the visibility. >> Mm. >> Wow, that's impressive. So I imagine you get to see quite a few different trends, working with different customers across the market. What do you think is coming next? How are you and your brilliant team adapting for an ever-changing space? (nails tapping) >> That's a great question. And this is what we are seeing, especially with some of our large customers.
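A hedged sketch of that baseline-plus-drift idea, with an invented snapshot format; a real system would stream snapshots from the cloud provider's configuration API rather than hard-coding them:

```python
# Minimal sketch of drift detection: keep the first snapshot of a
# resource's configuration as the baseline, then diff each later
# snapshot against it. The snapshot fields are invented for illustration.
baseline = {"encryption": "aes256", "public_access": False, "logging": True}

def config_drift(current, baseline=baseline):
    """Return the keys whose values have drifted from the baseline."""
    return {
        key: (baseline.get(key), value)
        for key, value in current.items()
        if baseline.get(key) != value
    }

later = {"encryption": "aes256", "public_access": True, "logging": False}
for key, (was, now) in config_drift(later).items():
    print(f"drift on {key!r}: {was} -> {now}")  # candidates for remediation
```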
There's a notion emerging of what's called security as infrastructure. >> Mm. >> Unlike security traditionally being an operational spend, there's a notion of investing in that. Look, if you're going to be procuring technology from AWS as infrastructure, what else will you do to secure it? And that's the notion that's really taking off. >> Nice. >> You are an advocate of what you call the shift up approach to security. I haven't heard that term before. What is shift- >> Me either. >> I sure have heard of shift left and shift right. >> Yes. >> But what is shift up? >> Great question. So for us, given the breadth of what's possible and the scale at which one needs to do things, the traditional approach has been shift left, where you try to get into the developer side of laptops, which is what we do. But if you were to look at it from the perspective that these changes occur at scale, and for you to figure out if there is anything malicious in there, you then need to look across it using observability techniques. Which means that you take a step up and look across the complete spectrum, from where the software is developed to where it's deployed. And that's what we call shift up security: taking it up one notch and looking at it using a telemetry-driven approach. >> Yeah, go for it. >> So telemetry driven. So do you integrate with the observability platforms that your customers are using? >> Yeah, so we've taken a lot of cues and IP from observability techniques, which are traditionally applied to numerical approaches to figuring out if things are changing, because there's a number which tells you. And we've applied that to state-related changes. We use a similar approach, but we don't look at numbers. We look at what's changing and then the rate of change. And what's actually changing allows us to figure out if there's something malicious. And the only way you can do it at scale is by getting the telemetry and not doing it on the actual workload. >> I'm curious, and this is maybe your own thought leadership moment, but as we adapt to nefarious things, love your use of the word nefarious, despite folks investing in cybersecurity, I mean the VCs are obviously funding all these startups, but beyond that, it's a huge priority. Breaches still happen. >> Yes. >> And they still happen all the time. They happen every day, every second. There are probably multiple breaches happening. I'm sure there are multiple breaches happening right now. Do you think we'll get to a point where things are truly secure and these breaches don't continue to happen? >> I'd love to say that (crowd cheering) the short answer is no. >> Right? (laughing) >> And this is where there are two schools of thought. You can always try to figure out, is there a lead-up where with a high degree of conviction you can say there's something malicious? The second part is you figure out, once you've been breached, how do you reduce the time, by figuring out your dwell time and mean time to know. >> Nice. So we have a bit of a challenge. I'm going to put this in the middle of this segment. >> Oh, okay. >> I feel like spicing it up for our last one. >> All right. >> I'm feeling a little zesty. >> All right. >> We've been giving everyone a challenge. This is your 30 seconds of thought leadership, your hot take on the most important theme for you coming out of the show and looking towards 2023.
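As a rough illustration of that telemetry-driven, rate-of-change approach, here is a toy sketch; the window size, spike factor, and resource names are all assumptions, not Uptycs's actual detection logic:

```python
# Count state-change events per resource in fixed windows and flag
# resources whose change rate jumps well above their own history.
from collections import defaultdict, deque

WINDOW = 60.0       # seconds per window (illustrative)
SPIKE_FACTOR = 5.0  # flag when current rate is 5x the trailing average

history = defaultdict(lambda: deque(maxlen=10))  # resource -> recent rates

def observe_window(counts):
    """counts: {resource: change events seen in the last WINDOW seconds}."""
    alerts = []
    for resource, count in counts.items():
        rate = count / WINDOW
        past = history[resource]
        avg = sum(past) / len(past) if past else 0.0
        if past and avg > 0 and rate > SPIKE_FACTOR * avg:
            alerts.append((resource, rate, avg))
        past.append(rate)
    return alerts

observe_window({"iam-role/ci": 2})          # first window builds history
print(observe_window({"iam-role/ci": 40}))  # sharp jump -> flagged
```

The point of the sketch is the shape of the analysis: it never inspects the workload itself, only the stream of change telemetry, which is what makes it cheap to run at scale.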
>> For us, the most important thing coming out of the show is that you need to get visibility across your cloud from two perspectives. One is from your workload. Second, in terms of protecting your identity. You need to protect your workload, and you need to protect your identity, and then you need to protect the rest of the services, right? So identity is probably the next perimeter, in conjunction with the workload. And that is the most important theme, and we see it consistently in our customer conversations out here. >> Now when you say identity, are you referring down to the individual user level? >> At a cloud level, when you have both bots as well as humans interacting with cloud, and, you know, bringing up workloads and bringing them down, the potential for things to go wrong due to automated accounts going haywire is really high. And if some privileges are leaked which are meant only for automation and get into the hands of people, they could inflict a lot of damage, right? So understanding the implications of IAM in the realm of cloud is extremely important. >> Is this, I thought zero trust was supposed to solve for that. Where does zero trust fall short? >> So zero trust is a bigger thing. It could be in the context of someone trying to access services from their laptop, to like a, you know, email exchange or something internal >> Hm. >> on the internet. In a similar way, when you use AWS as a provider, you've got a role and then you've got privileges associated with the role. When your identity is asserted, we need to make sure that it's actually indeed you. >> Mm. >> And there's a bunch of analytics that we do today that allow us to get that visibility. >> Talk about the internal culture. I'm going to let you get a little recruiting sound bite >> Yes. >> out of this interview. How big is the team? What's the vibe like? Where are you all based? >> So we are based in Boston. These days we are globally distributed. We've got R and D centers in Boston. We've got two places in India. And we've got a distributed workforce across the US. Since pre-pandemic to now we've increased four X or five X, from around 60 employees to 300 plus. And it's a very- >> Nicely done. >> We have a very strong ethos and it's very straightforward. We are very engineering and product driven when it comes to innovation, engineering driven when it comes to productivity. But we are borderline maniacal about customer experience, and that's what's resulted in our success today. >> Something that you have in common with AWS. >> I would arguably say so, yes. (laughter) Thank you for identifying that. I didn't think of it that way, but now that you put it, yes. >> Yeah, I think one of the things that I've loved about the whole show, and I am curious if you felt this way too, is so much community-first, customer-first behavior here. >> Yeah. >> Has that been your take as well? >> Yes, very much so. And that's reflected in the good fortune of our customer engagement. And if you were to look at where our growth has come from, despite the prevalent macroeconomic conditions, all our large customers have doubled down on us because of the experience we provide. >> Ganesh, it has been absolutely fantastic having you on theCUBE. Thank you so much for joining us today. >> Yes, thank you. And if I may say one last thing? >> Of course you can. >> As a venture, we've put together a new program, especially for AWS re:Invent.
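One concrete IAM hygiene check in the spirit of this point, sketched with boto3; treating AdministratorAccess as the flag-worthy policy is an assumption made for illustration, not Uptycs's actual analytics:

```python
# Find roles carrying the broad AdministratorAccess managed policy,
# since leaked credentials for such a role can inflict the most damage.
import boto3

iam = boto3.client("iam")

def over_privileged_roles():
    """Yield names of roles with AdministratorAccess attached."""
    paginator = iam.get_paginator("list_roles")
    for page in paginator.paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
            for policy in attached["AttachedPolicies"]:
                if policy["PolicyName"] == "AdministratorAccess":
                    yield role["RoleName"]

for name in over_privileged_roles():
    print(f"role {name} has AdministratorAccess; review whether it needs it")
```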
And it allows people to experience everything that Uptycs has to offer, up to a thousand endpoints, for a dollar. It's called the Uptycs secret menu. >> Woo. >> Go to Uptycsecretmenu.com and you'll be able to avail of that until the end of the year. >> I'm signing up right now. >> I know. I was going to say, I feel like that's the best deal of re:Invent. That's fantastic, Ganesh. >> Yes. >> Well again, thank you so much. We look forward to our next conversation. Can't wait to see how many employees you have then, as a result of this wonderful recruitment video that we've just... >> We hope to nominally double. Thank you for having me here. (laughter) >> Absolutely. And thank all of you for tuning into our over 100 interviews here at AWS re:Invent. We are in Las Vegas, Nevada, signing off for the last time with Paul Gillon. I'm Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (upbeat music fading)

Published Date: Dec 2, 2022


Anish Dhar & Ganesh Datta, Cortex | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: TheCUBE presents Kubecon and Cloudnativecon Europe, 2022. Brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain and Kubecon, Cloudnativecon Europe, 2022. I'm Keith Townsend and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sights. It is an amazing city. I'm from the US; it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, co-founder and CTO of Cortex. Anish, you're CEO of Cortex. We were having a conversation. One of the things that I ask my clients is, what is good? And you're claiming to answer the question about what is quality when it comes to measuring microservices. What is quality? >> Yeah, I think it really depends on the company, and I think that's really the philosophy we had when we built Cortex: we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services are often the ones who understand all the intricacies of the service. What are the downstream dependencies, who's on call for this service? Where does the documentation live? All of these things I think impact the quality of the service. And as these engineers leave the company or they switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, I think very concrete examples: for an organization that's already modern, their idea of quality might be uptime incidents. For somebody that's going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means where are we in that journey? Are you on our latest platforms? Are you running CI, are you doing continuous delivery? Quality can mean a lot of things, and so our perspective is how do we give you the tools to say, as an organization, here's what quality means to us. >> So at first, my mind was going through, when you said quality, Anish, you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera. I was thinking observability, measuring how much time does it take to have a transaction. But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So I think when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level, and I think that's one perspective of what quality is. But as you're going through this journey, you might say the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CICD, you're running vulnerability scans. You're using Docker.
Like what that means to us can be very different. So observability is just one aspect of are you doing things the right way? Good to us means you're using SLOs, you are tracking those metrics, you're reporting that somewhere. And so that's one component, for our organization, of what quality can mean. >> I'm kind of taken aback by this, because I've not seen someone kind of give the idea. And I think later on, this is the perfect segment to introduce theCUBE clock, in which I'm going to give you a minute to kind of give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... What's the first process you do when you engage a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically we have a service discovery onboarding flow where customers can connect to different sets of sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service, almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, then you can start writing scorecards, which grade the quality of those services across those different verticals Ganesh was talking about, like whether it's a monolith, a microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with my service, across reliability or security or operational maturity. I think it gives us insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs. That is usually something that might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLOs, and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so that service cataloging concept, the service catalog says, hey, here's my APM integration. Then a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds, we want you to be tracking them. Are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be. Ours is more around, hey, if you care about SLOs, we're going to tell the service owners, hey, you need to have at least two SLOs for your service and you've got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important.
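To make the scorecard idea concrete, here is an invented illustration of codifying one such standard ("every service has an owner and at least two passing SLOs") as data plus a grading function; this is a sketch, not Cortex's actual API or rule syntax:

```python
# Invented services and scoring rules, purely for illustration.
services = [
    {"name": "payments", "oncall": "team-a", "slos": [
        {"target": 0.999, "passing": True},
        {"target": 0.99,  "passing": True}]},
    {"name": "search", "oncall": None, "slos": []},
]

def score(service):
    """Grade one catalog entry against the codified standards."""
    points = 0
    if service["oncall"]:
        points += 10  # ownership is defined
    slos = service["slos"]
    if len(slos) >= 2 and all(s["passing"] for s in slos):
        points += 25  # SLO coverage and compliance
    return points

for svc in services:
    print(svc["name"], score(svc))  # payments 35, search 0
```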
It is a component of a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for SLOs, how do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've sucked in your services from your GitHub, and so now your services are in Cortex. What we can do is we can actually discover from your APM tools; we can say, hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog. The service catalog says, given a service, here's everything about that service. Here's the vulnerability scans, here's the APM, the monitors, the SLOs, the JIRA tickets; all that stuff comes into a single place. And then our scorecards product can go back out and say, hey, Datadog, tell me about the SLOs for this service. And so we're going to get that information live and then score your services against that. And so we're integrating with all of your third party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how do we make sure engineers define SLOs, are passing them, and actually care about them. And scorecards can be used to, one, see which teams are actually meeting these guidelines, and then two, get those teams adopted on SLOs. Let's track that; you can do all of that in Cortex, which is I think a really interesting use case that we've seen. >> So let's talk about kind of my use case in the end to end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices, and then I want to ensure that I'm delivering... if not, you know what, let's leave it a little bit more open ended. How do I know that I'm better at the end? I was in a monolith before; how do I measure, now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to, and we talk about this all the time for our customers that are going through that process: you can't define better if you don't define a baseline. Like, what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability, uptime? What is the core metric that we're tracking? And so you start by defining that as an organization. And that is kind of a hand wavy thing: why are we doing microservices? Once you have that, then you define this scorecard, and that's our golden path. Once we're done doing this microservice migration, can we say, yes, we have been successful, and those metrics that we care about are being tracked? And so where Cortex fits in is from the very first step of creating a service. You can use Cortex to define templates: one click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say, are we meeting our golden path standards?
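A hedged sketch of that discovery step against Datadog's public v1 SLO API; the endpoint shape is documented, but treat the tag convention and the response fields used here as assumptions rather than a verified integration:

```python
# Ask Datadog for the SLOs tagged with a service, so a catalog entry
# can be scored against live data. Requires DD_API_KEY / DD_APP_KEY
# in the environment; the "service:" tag convention is an assumption.
import os
import requests

def fetch_service_slos(service_name):
    resp = requests.get(
        "https://api.datadoghq.com/api/v1/slo",
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        params={"tags_query": f"service:{service_name}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for slo in fetch_service_slos("payments"):
    print(slo["name"], slo["thresholds"])
```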
Like if it's uptime, you can track uptime metrics in scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization, are you doing CICD and vulnerability scans, moving faster as a team? You can track that. And so you can start seeing trends at a per team level, at a per department level, at a per product level, saying, hey, we are seeing consistent progress in the metrics that we care about, and this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand wavy thing. We're told that cloud native helps me to do things faster with less defects so that I can do new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my architecture to microservices so I reduce call wait time on my customer service calls. So I can easily see how I can measure, are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard. But what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecards means you can pipe in that data to Cortex. And what we recommend to customers is to track the outcome metrics and track the input metrics as well. And so what is the input metric to call wait time? Maybe it's the fact that if something goes wrong, we have the runbooks to quickly roll back to an older version that we know is running, that way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say, hey, we're going to ping you as an incident commander. Those are kind of the input metrics: if we do these things, then we know our call wait time is going to drop, because we're able to respond faster to incidents. And so you want to track those input metrics, and then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see, is there a correlation between those input metrics, are we doing things the right way, versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex: not so much, hey, we're going to be prescriptive about it; it's, here's this framework that will let you track all of that and say, are we doing things the right way, and is it giving us the value that we want? And being able to report that up to engineering leadership and say, hey, maybe these services are not doing, like, we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that I think is super valuable. >> Yeah, absolutely. I think just to touch on the reporting, I think that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built, and that bubbles up into teams, products, business units, and Cortex lets you represent that.
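A toy sketch of checking whether the input practices actually track the outcome metric; the numbers are invented, and a real pipeline would pull both series from Prometheus or Datadog:

```python
# Correlate an "input" practice metric (runbook coverage) against the
# business "outcome" metric (call wait time) over eight weeks.
from statistics import correlation  # Python 3.10+

runbook_coverage = [0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]  # input metric
call_wait_secs   = [310, 295, 240, 225, 190, 170, 150, 140]   # outcome metric

r = correlation(runbook_coverage, call_wait_secs)
print(f"correlation: {r:.2f}")  # strongly negative -> practices track the outcome
```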
So now I can, as a CTO, come in and say, hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization. >> All right, Anish and Ganesh, we're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well I would say, Anish was just touching on this. For a CTO, their question is how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing. >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand wavy thing that says, oh, my team is good, my team is bad. We want to come in and define here's what service quality means, and we want that to be a number. You want that to be something that can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say, here's what we care about, here's how we're tracking it, here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (soft music)

Published Date: May 20, 2022


Ganesh Subramanian, Gainsight | Comcast CX Innovation Day 2019


 

>> From the heart of Silicon Valley, it's theCUBE, covering COMCAST Innovation Day, brought to you by COMCAST. >> Hey, welcome back, Jeff Frick here with theCUBE. We're at the COMCAST Silicon Valley Innovation Center. You know, there's innovation centers all over Silicon Valley; we hadn't been to the COMCAST one until we came to this event. It's very, very cool, I think it's like five storeys in this building, where they're developing a lot of new technologies, partnering with technologies. But today the focus is on customer experience; they brought together a panel of people to talk about some of the issues, and we're excited to have a representative from a company that's really out on the edge of defining customer experience, and measuring customer experience, joined by Ganesh Subramanian. He is the senior director of product marketing for Gainsight. Ganesh, great to see you. >> Great, happy to be here. >> Yeah, so I'm a huge Nick Mehta fan, I've interviewed him before, I've been following Gainsight for a long time, and it really struck me the first time that Nick said, you know, CRM is basically order management. It's not customer relationship management; customer relationships are complicated, and they're multi-faceted, and there's lots of touch points. And you guys really try to build a solution to help customers actually manage that relationship so they have a great experience with their customer. >> Yeah, that's totally right, and not to say that CRM isn't an important ingredient when you make that cake, but there's a lot of other touchpoints, right? How are people interacting with your digital products? What is that customer journey across sales, services, support? How does all of that come together? So what Gainsight does is really provide the customer cloud to bring all of those solutions together so that businesses can really operate in a more customer-centric way.
How should they be measuring those things? How should they be balancing, 'cause 'cause, you know, you can sell dollars for 90 cents, have a really happy customer, not going to be in business very long. >> Yeah I think that's kind of the secret sauce right? True innovation, what we talked about today at COMCAST, a lot about, how do you take that next step forward? How do you improve your products and services in ways that make customers, customers for life? Right, and if you make the right investments, you actually find out that maybe it's, it's minor change, maybe it's process change in your call center or call service, maybe it's implementing AI in an appropriate way, so that you're able to deliver more value with less time, or maybe it's transformative, maybe it's something that's a new service you're offering all together, that's making customers get outsized or unrealized returns on their investment. Well, it doesn't matter what that investment was, if it's going to long term drive your company to higher valuations and greater competitive differentiation. So we don't think about customer experience on kind of below the line, what's going to get me the incremental ROI, we really think about it as a fundamental differentiator for your business. >> Right. Now you're in charge of, of kickin' off new products. >> That's right. >> And you know one of the things I think is really interesting about the COMCAST voice, which has had a lot of conversation today, is I still get emails from COMCAST telling me how I should use it! Right 'cause it's a different behavior, it's a different experience that I'm not necessarily used to. As you look forward, you know introducing new products, what are some of the, the kind of trends that you're keepin an eye on, what do you think is going to kind of change and impact some of the things you guys are bringing to market? What are some of the new things we should be thinking about in customer experience? >> Yeah absolutely. So one thing at Gainsight, one thing we've learned leading the customer success movement is that to be customer-centric is more than a given function, or a given team, customer success managers kind of took the mantle in B2B and started leading the charge, leading the way towards being more customer-centric but that team on their own can't do everything. Nor do they want to, or can they, right? So, one big change and one big innovation that we're leading the front on is how do you bring all those different teams together? Which is why we launched the Gainsight customer cloud. So what we're doing is we're bringing disparate data together, that used to be silohed in functional specific software, bring that into a single source of truth, to truly provide an actionable customer 360, one that provides meaning to different teams with the right context, and then drive action off of that. So whether it's an automated email to get, improve product adoption in the COMCAST example, or maybe it's some kind of escalation effort, where you need a cross-functional team to get together on the same page, to improve a red customer, or maybe it's something that's in the product itself, by just making the product easier to use or a little bit more intuitive, the, all of your end users will end up benefiting from that. What Gainsight's tryna do is to try figure out, how can we break down these walls across these different teams, make it easier for people to collaborate to improve the customer experience. 
>> So Ganesh, I've got to tease you, right, 'cause everyone's eyes just rolled when you said 360 view of the customer; we've been talking about this forever. >> Yeah. >> So what's different, you know, what's different today? Not specifically for what you're trying to do with your product (and share that too) but more generally, that we're getting closer to that vision. >> Yeah. >> That we're actually getting closer to delivering on the promise of a 360 view, and information from that view that will enable us to take positive action? >> I love that question, and I think whenever you hear the words 360 view or digital transformation, you're going to get a couple eye-rolls in the crowd, right? And I actually totally believe that. You know, to date I think we've done things in too much of a waterfall methodology. Let's spend three years, get a unified idea across all our disparate data sources, and then we're going to be customer-centric. I think we've learned our lessons over the course of time that, hey, you know, the end result doesn't really materialize in the time frame and ROI you expected, so why don't we start with the other end of the spectrum? What are the gaps that customers are perceiving? Let me go back to that example of product ease of use. Are we identifying that as a major gap? Then how do we go solve that? How do we reverse engineer that process? And by the way, that doesn't just fall on the product team to make the product easier; services need to onboard customers more effectively, and you need documentation so that customers can access and understand the key aspects of your product in a more concrete way. So all of that needs to come together. So I think the biggest difference between what we used to talk about, with 360s and digital transformation, and where we are today, is really the context and the outcome you're trying to deliver, and then reverse engineering the 360 that's most meaningful to you. To make that a little bit more clear, what does that mean at the grassroots level? If you're a services team member, you're working on projects. Does a 360 view about the next opportunity from a financial or commercial perspective really matter to you? How far down in that 360 view do you have to scroll before you start seeing information that's relevant? So at Gainsight what we're trying to do is use a many-to-many relationship mapping, so that if you're a services team member, or a sales member, the view you're accessing is curated to what you need to actually do. >> Right. >> And that'll drive adoption of the digital transformation efforts within your organization. >> Right. Which then obviously opens up the opportunity for automation and AI and ML to, as you said, make sure the right information is getting to the right person at the right time, in the context of the job that I have, and building that customer relationship. >> That's right. Yeah, we think about AI all the time: how's that going to improve the customer experience? It starts with that data foundation and understanding, hey, what should we own and what should we leverage? And being very conscious about what you're about to do, and then second, thinking about those point problems and, again, reverse engineering how we can staff-augment, or make the experience better, maybe make the lives of our employees a little bit better when they're engaging with customers. Ultimately it's got to be in service of people. >> Right. Well Ganesh, thanks for sharing your story.
Again, I think what you guys, and Nick, and Gainsight are doing is so important in terms of redefining this beyond order management, toward actual customer relationship management. >> That's right. >> Thanks for spending a few minutes with us. >> Awesome. My pleasure. >> All right. Thank you. He's Ganesh, I'm Jeff, you're watching theCUBE. We're at the COMCAST Innovation Center in Silicon Valley. Thanks for watching, we'll see you next time.

Published Date : Nov 5 2019



Ganesh Bell, GE Power - GE Minds + Machines - #GEMM16 - #theCUBE


 

>> Welcome back everybody, Jeff Frick here with theCUBE. We're in San Francisco at the Minds and Machines conference: three thousand people, the fifth year of the show. Really everything about GE, all the players from GE, are here, but it's really being driven by the digital, the digitization of what was a bunch of stuff and is still a bunch of stuff. But now we're digitizing it all. I'm really excited to welcome back Ganesh Bell. We saw you what, nine months ago? Six months ago? Time flies. The Chief Digital Officer of GE Power. Welcome. Great to see you again. >> Thank you. Thanks for having me here. >> Absolutely. So just first impressions of this event. Pretty amazing. >> Yes, it's gotten really big, right? And I remember stories of people telling me that, hey, this is the fifth one we're doing; the first one, we almost had to, like, pull people to come here. Now we're figuring out how we get to a bigger location, because this is getting mainstream. Everybody is looking at how digital helps their business. Because in the industrial sector, productivity has slowed down, right, over the last four or five years. It has become only 25 percent of what it used to be. So the biggest lever for productivity, efficiency, and creating new value is through digital transformation. It's not just automation. It's about creating new value, new revenue, from digital assets, and that's why you see the excitement across all of the industries here. >> What's interesting is you came from the I.T. world. There's already kind of been the digital transformation in the I.T. world, in that a lot of the I.T. stuff has now all been turned into electronic assets, right? You have no paper. But that can't happen in the OT world, right? We've still got generators, we've still got jet engines. You still have physical things, but it's still a digital transformation. So how are those things kind of meshing together? >> Yeah, so you know, having worked in software all my career in Silicon Valley, you arrive with a belief that every business, every industry will be reimagined with software. We've seen it in retail and music and entertainment and travel, but there, software ate the world. Yes, software is going to eat the world, but here, software is transforming the world too, because the physical assets matter. All of the machines that we make, for example in power, are machines that power the world; more than one third of the world's electricity comes from a machine. Right. So all of these machines generate electrons, but they also generate a lot of data: more than, you know, two terabytes of data a day can be generated from a power plant. That's more data than most consumers will generate across an entire year on social media. So this data matters. We can learn a lot from this data and make these machines efficient, more productive, and, kind of the sexiest phrase for some of the industrialists, achieve no unplanned downtime, right? Eliminating breakdowns, which turns into massive productivity and value for our customers. >> The thing I think would surprise most people, and Jeff talked about it in his keynote yesterday, is that there have not been the kind of long, traditional productivity gains in the industrial machines themselves, and you think, wow, they've been around for a long time, I would think they would be pretty, pretty efficient. But in fact there are still these huge inefficiency opportunities to take advantage of with software, which is why there's this huge value-creation opportunity. >> Absolutely.
So now also think about the cycle time of innovation, right? All of these are mechanical machines, right? We know with advances in materials science and engineering and, you know, brilliant manufacturing, we can get more out of the physical asset, but that requires a big upgrade cycle. What if we upgrade the machine with software? And that's really what we did in our businesses across power, right, where we call them edge applications, where it's about improving the flexibility or efficiency of a machine. All of these are models and algorithms, and the way to think about it is, all these machines; in fact, outside we have a giant machine that powers this entire event, and you can see the digital twin version of that machine right here on the screen. All that is, is a virtual representation of that machine from the physical world, where we have all the thermal models, the transient models, the heat models, the performance models, all connected. But now we can run the simulation in real time on all of the operational data and apply algorithms to get more performance out. A great example is we just launched one of the world's most efficient, most flexible gas turbines, a giant turbine called the HA. >> With the additional software, we were able to improve the efficiency; it's now the Guinness World Record holder as the most efficient, flexible power plant in the world. >> Was that a brand-new unit that was developed with the benefit of software, or was that really applying software to an older approach? >> That was a brand-new unit, but overlaid with software we were able to eke out more efficiency as well. And we're doing this on older power plants too. In fact, a great story is we had a customer in Italy called A2A, a multi-utility company. They have a power plant in northern Italy. They had shut it down because it was no longer competitive to operate that power plant in the modern world, where there were so many renewables. Because you've got to compete in a market called ancillary services, meaning you need to be able to quickly ramp up power when the wind doesn't blow or the sun doesn't shine bright, and shut it down right away. You can't do that with giant power plants. What we did was we completely modeled that power plant in software, and with the digital twin we showed them that it actually can be competitive. So with the addition of software, we were able to reopen a power plant that was mothballed, jobs were reinstated, and the power plant is actually flexing in the open, competitive ancillary services market. So all of this is possible because of software; we're able to breathe new life into big, giant, heavy machines. >> It's just a crazy time in the power space. You know, we've seen, kind of, in the US, you know, the nukes are being turned off. I grew up in Portland; we had Trojan on the Columbia River, and we would take field trips, with the smoke coming out of the cooling tower. We've got the rise of renewables really, really going crazy. You've got these crazy dynamics in the price of oil. How's that playing out? How are you guys helping deal with this multimodal mix? It's interesting here that oil and gas is still its own separate group. I'm like, why didn't it all just become 'energy,' rather than renewables, oil and gas, nuclear, et cetera? >> So you know, that's a great question. The oil and gas industry has lots of other things, downstream, midstream, and so on. But at least across all of the electricity businesses, we're coming together.
And we call this the Electricity Value Network. Think about it: we used to think about a value chain, where the electrons got generated and they traveled to the consumer. It was a linear model. And we know from Silicon Valley that when digital enters industries, they all become network models, right? So we're calling this the Electricity Value Network. And the interesting thing is, our customers have different mixes of fuel in every part of the geography of the world. North America is still a good mix. Renewables are on the rise in California; we're going to have 50 percent of power from renewables by 2030. But you still have to balance and optimize the mix of power from gas and nuclear and other sources of fuel, and hydro and steam and so on, right? And in Europe, there's an abundance of renewables. >> They're struggling to integrate them into the grid: an abundance of renewables, or abundant capacity, right? >> Renewables are growing, and so they have to integrate them better. In China and India, for example, coal and steam are still the big source of power, because that's the fuel they have. They don't have as much gas. So the mix of fuel will change around the world. The beauty of software is we can help optimize the mix. In the past we always talked about renewables as a silver bullet, or gas as a silver bullet. Now we're saying software is the silver bullet: regardless of the mix of fuel, we can optimize the generation of electrons. And we're seeing this entire industry of electricity being transformed with digital, and we call that the Electricity Value Network. >> It's crazy, interesting times. So, big show; any big announcements happening here at the show? >> Yeah, lots of big announcements. One of the biggest ones is we just signed a big, enterprise-wide digital transformation relationship with Exelon. Exelon is the largest utility in North America: they serve 10 million customers, but they also generate a lot of power, over 35,000 megawatts across nuclear, wind, solar, hydro, and gas. And, you know, a year and a half ago we started a journey with them on understanding what the value of digital is. They're such a believer, and we learned a lot working with them as well, and now they're deploying our Predix platform, the industrial platform, and APM, which is our asset performance management software, and our full suite of operations optimization, business optimization, and cyber, across the entire enterprise. >> So it's a big strategic agreement with them, and the way we like to tell people is that, you know, a year and a half ago we were talking about what would happen if a wind farm went digital, or a power plant. Right now we're talking about what happens when an entire utility goes digital, or an entire industry of electricity goes digital, and leaders like Exelon have the opportunity to create that tipping point in the industry. >> It does feel like this is the moment; I think digital transformation of the electricity industry just went real, and this is it. I presume not everything that they own is GE equipment? >> No, the software is agnostic. >> Right, so this is really a software deal with their existing infrastructure, which probably has a blend of GE gear and who knows what other gear that's generating. This is no different than how we in Silicon Valley would think about an enterprise software deal.
It is an enterprise subscription deal for them, except it's to our cloud and our edge solutions, and it's every machine, right, every single asset, whether it's a giant gas turbine or a small little pump. Every machine either has sensors, or we will sensorize it or sense the environment, but all that data is being put into Predix. We will build digital twins of their entire power plants and give them new insight, and help them, you know, eliminate unplanned downtime and reduce operational costs. >> Exciting times. We've got to get those batteries done right; once we get storage, we can connect them and optimize them as well. >> Right. Absolutely. >> I look forward to catching up six months from now and seeing where you guys are going; it's fast. Bill and you and the team have grown, you know, from a little bit of a software skunkworks out there. How many people are in San Ramon now? >> I think we're about a hundred people. I think we're diversifying, and it's a great challenge. So we've got the Adsit camp coming on the horizon. Oh, and Sarah will be there. You can hit me up on Twitter as well if you're interested in working on meaningful, purposeful things like energy and the coolest things in software. >> Super. All right, good. Thanks for stopping by. >> All right. Thank you. >> He's Ganesh Bell, I'm Jeff Frick. You're watching theCUBE. We'll be back with our next segment after this short break.

Published Date : Nov 17 2016



Incompressible Encodings


 

>> Hello, my name is Daniel Wichs, I'm a senior scientist at NTT Research and a professor at Northeastern University. Today I want to tell you about incompressible encodings. This is a recent work from Crypto 2020, and it's joint work with Tal Moran. So let me start with a question: how much space would it take to store all of Wikipedia? It turns out that you can download Wikipedia for offline use, and some reasonable version of it is about 50 gigabytes in size. So as you'd expect, it's a lot of data; it's quite large. But there's another way to store Wikipedia, which is just to store the link www.wikipedia.org, and that only takes 17 bytes. For all intents and purposes, as long as you have a connection to the internet, storing this link is as good as storing the Wikipedia data: you can access Wikipedia with this link whenever you want. And the point I want to make is that when it comes to public data like Wikipedia, even though the data is huge, it's trivial to compress it down, because it is public, just by storing a small link to it. And the question for this talk is: can we come up with an incompressible representation of public data like Wikipedia? In other words, can we take Wikipedia and represent it in some way such that this representation requires the full 50 gigabytes to store, even for someone who has the link to the underlying Wikipedia data and can get the underlying data for free? So let me actually tell you what this means in more detail. This is the notion of incompressible encodings that we'll focus on in this work. An incompressible encoding consists of an encoding algorithm and a decoding algorithm. These are public algorithms; there's no secret key, and anybody can run them. The encoding algorithm takes some data m, let's say the Wikipedia data, and encodes it in some probabilistic, randomized way to derive a codeword c. You can think of the codeword c as just an alternate representation of the Wikipedia data: anybody can come and decode the codeword to recover the underlying data m. The correctness property we want here is that no matter what data you start with, if you encode the data m and then decode it, you get back the original data m. This should hold with probability one over the randomness of the encoding procedure. Now for security, we want to consider an adversary that knows the underlying data m, say it has a link to Wikipedia and can access the Wikipedia data for free, without paying to store it. The goal of the adversary is to compress the codeword we created, this new randomized representation of the Wikipedia data. So the adversary consists of two procedures, a compression procedure and a decompression procedure. The compression procedure takes as input the codeword c and outputs some smaller compressed value w, and the decompression procedure takes w, and its goal is to recover the codeword c. The security property says that no efficient adversary should be able to succeed in this game with better than negligible probability. So there are two parameters of interest in this problem. One is the codeword size, which we'll denote by alpha, and ideally we want the codeword size alpha to be as close as possible to the original data size. In other words, we don't want the encoding to add too much overhead to the data. The second parameter is the incompressibility parameter beta, and that tells us how much space, how much storage, an adversary needs to use in order to store the codeword.
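To pin down the syntax and security game just described, here is one way to write them in standard notation. This is a paraphrase of the spoken definition, not a verbatim excerpt from the paper, and the symbol choices are mine.

```latex
% Incompressible encoding (Enc, Dec), with security parameter \lambda.
% Correctness: for all data m,
%   Pr[ Dec(Enc(m)) = m ] = 1   (over the coins of Enc).
% Security: for all efficient (Compress, Decompress) with output bound \beta,
\[
\Pr\Big[\; c \leftarrow \mathsf{Enc}(m);\;
           w \leftarrow \mathsf{Compress}(c, m),\ |w| \le \beta;\;
           \mathsf{Decompress}(w, m) = c \;\Big] \le \mathrm{negl}(\lambda).
\]
% Parameters: codeword size \alpha = |c| (want \alpha \approx |m|),
% incompressibility \beta (want \beta \approx \alpha).
```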
And ideally, we want this beta to be as close as possible to the codeword size alpha, which should itself be as close as possible to the original data size. I want to mention that there is a trivial construction of incompressible encodings that achieves very poor parameters. The trivial construction is: just take the data m, concatenate some randomness to it, and store the original data m plus the concatenated randomness as the codeword. Now even an adversary that knows the underlying data m cannot compress the randomness. So this construction is incompressible with an incompressibility parameter beta that just corresponds to the size of the randomness we added; essentially, the adversary cannot compress the random part of the codeword. This gets us a scheme where alpha, the size of the codeword, is the original data size plus the incompressibility parameter beta. And it turns out that you cannot do better than this information-theoretically. So this is not what we want; we want to focus on what I will call good incompressible encodings. Here, the codeword size should be as close as possible to the data size, just (1 + o(1)) times the data size. And the incompressibility should be essentially as large as the entire codeword: the adversary cannot compress the codeword almost at all, so the incompressibility parameter beta is (1 - o(1)) times the data size, or the codeword size. In essence, what this means is that we want to take the randomness of the encoding procedure and spread it around in some clever way throughout the codeword, in such a way that it's impossible for the adversary to separate out the randomness and the data, store only the randomness, and rely on the fact that it can get the data for free. We want to make sure the adversary essentially has to store this entire codeword, which contains both the randomness and the data in some carefully intertwined way, and cannot compress it down using the fact that it knows the data part. This notion of incompressible encodings was actually defined in a prior work of Damgård, Ganesh, and Orlandi from Crypto 2019. They defined a variant of this notion, under a different name, as a tool or building block for a more complex cryptographic primitive that they called Proofs of Replicated Storage, and I'm not going to talk about what these are. In the context of constructing these Proofs of Replicated Storage, they also constructed incompressible encodings, albeit with some major caveats. In particular, their construction relied on the random oracle model, so it was a heuristic construction, and it was not known whether you could do this in the standard model. The encoding and decoding time of the construction was quadratic in the data size, and here we want to apply these incompressible encodings to fairly large data, like the 50-gigabyte Wikipedia data, so quadratic runtime on such huge data is really impractical. And lastly, the proof of security for their construction was flawed, or somewhat incomplete: it didn't consider general adversaries. This flaw was also noticed by concurrent work of Garg, Lu, and Waters, and they managed to give a fixed proof for this construction, but it required quite a lot of effort; it was a highly non-trivial and subtle proof to show the original construction of Damgård, Ganesh, and Orlandi secure.
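In symbols, the trade-off just described looks like this, restating the talk's parameters with |m| the data length and r the added randomness:

```latex
% Trivial construction: c = m \| r, so the adversary stores only r.
\[
\alpha = |m| + \beta \qquad \text{(trivial; optimal information-theoretically)}
\]
% Good incompressible encodings (the computational target):
\[
\alpha = (1 + o(1))\,|m|, \qquad \beta = (1 - o(1))\,\alpha .
\]
```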
So in our work, we give a new construction of these types of incompressible encodings. Our construction achieves some form of security in the common reference string model, in fact a common random string model, without the use of random oracles. We have linear encoding time, linear in the data size, so we get rid of the quadratic, and we have a fairly simple proof of security; in fact, I'm hoping to show you a slightly simplified form of it in this talk. We also give some lower bounds and negative results showing that our construction is optimal in some respects, and lastly we give a new application of this notion of incompressible encodings to something called big-key cryptography. I want to tell you about this application; hopefully it'll give you some intuition about why incompressible encodings are interesting and useful, and also about what it is that they're really trying to achieve. So, the application to big-key cryptography is concerned with the problem of system compromise. A computer system can become compromised either because the user downloads malware or because a remote attacker manages to hack into it. When this happens, the remote attacker gains control over the system, and any cryptographic keys stored on the system can easily be exfiltrated, just downloaded out of the system by the attacker, and therefore any security those cryptographic keys were meant to provide is completely lost. The idea of big-key cryptography is to mitigate such attacks by making the secret keys intentionally huge, on the order of many gigabytes to even terabytes. The idea is that a very large secret key is harder to exfiltrate: either because the adversary's bandwidth to the compromised system is just not large enough to exfiltrate such a large key, or because it might not be cost-effective to download so much data off the compromised system and store it in order to use the key in the future, especially if the attacker wants to do this at mass scale, or because the system might have some other mechanism, say a firewall, that would detect such large amounts of leakage out of the compromised system and block it in some way. So there's been a lot of work on this idea of building big-key crypto systems, crypto systems where the secret key can be set arbitrarily huge, and these crypto systems should satisfy two goals. One is security: security should hold even if a large amount of data about the secret key is leaked, as long as it's not the entire secret key. So even if an attacker downloads, let's say, 90% of the secret key, the security of the system should be preserved. The second property is that even though the secret key of the system can be huge, many gigabytes or terabytes, we still want the crypto system to remain efficient. In particular, this means that the crypto system cannot even read the entire secret key during each cryptographic operation, because that would already be too inefficient; it can only read some small number of bits of the secret key during each operation that it performs. And so there's been a lot of work constructing these types of crypto systems, but one common problem for all these works is that they require the user to waste a lot of the storage on their computer to store this huge secret key, which is useless for any other purpose, other than providing security.
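The talk doesn't spell out how a scheme touches only a few bits of a huge key per operation; one standard trick from the big-key literature is to hash a handful of randomly probed positions into a short sub-key. The sketch below is only illustrative: the function name, probe count, and hash choice are my own assumptions, not from the paper.

```python
# Illustrative sketch (assumptions, not from the paper): derive a short
# sub-key from a huge key by probing a few random positions, so an adversary
# who exfiltrated most (but not all) of the key misses some probed bytes.
import hashlib
import secrets

def derive_subkey(big_key: bytes, positions: list[int]) -> bytes:
    """Hash the probed bytes (with their positions) into a 256-bit sub-key."""
    h = hashlib.sha256()
    for p in positions:
        h.update(p.to_bytes(8, "big"))   # bind the position
        h.update(big_key[p:p + 1])       # and the probed byte
    return h.digest()

big_key = secrets.token_bytes(1 << 20)   # stand-in for a multi-gigabyte key
positions = [secrets.randbelow(len(big_key)) for _ in range(64)]
subkey = derive_subkey(big_key, positions)
# Each operation touches only 64 bytes of the huge key. An adversary holding
# 90% of the key misses each probe with probability 0.1, so with probability
# about 1 - 0.9**64 (roughly 99.9%) it cannot recompute the sub-key.
assert derive_subkey(big_key, positions) == subkey
```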
And users might not want to do this, to waste all that storage on useless random data. So that's the problem we address here, and the new idea in our work is: let's make the secret key useful. Instead of a secret key filled with useless random data that the cryptographic scheme picks, let's have a secret key that stores, say, the Wikipedia data, which a user might want to store on their system anyway, or the user's movie collection, or music collection, et cetera: data that the user would want to store on their system anyway. We want to use that as the secret key. Now, if we think about this for a few seconds: is it a good idea to use Wikipedia as a secret key? No, that sounds like a terrible idea. Wikipedia is not secret, it's public, it's online, anyone can access it whenever they want. So that's not what we're suggesting. We're suggesting to use an incompressible encoding of Wikipedia as the secret key. Now, even though Wikipedia is public, the incompressible encoding is randomized, and therefore the adversary does not know the value of this incompressible encoding. Moreover, because it's incompressible, in order for the adversary to steal, to exfiltrate, the entire secret key, it would have to download a very large amount of data out of the compromised system. So there's some hope that this could provide security, and we show how to build public-key encryption schemes in this setting that make use of a secret key which is an incompressible encoding of some useful data like Wikipedia. So the secret key is an incompressible encoding of useful data, and security ensures that the adversary will need to exfiltrate almost the entire key to break the security of this crypto system. So in the last few minutes, let me give you a very brief overview of our construction of incompressible encodings. For this part, we're going to pretend we have a really beautiful cryptographic object called a lossy trapdoor permutation. It turns out we don't quite have an object this beautiful, and in the full construction we relax this notion somewhat in order to get our full construction. A lossy trapdoor permutation is a function f_pk, keyed by some public key pk, that maps n bits to n bits. We can sample the public key in one of two indistinguishable modes. In injective mode, the function f_pk is a permutation, and there is in fact a trapdoor that allows us to invert it efficiently. In lossy mode, if we take some random value x and give you f_pk(x), then this loses a lot of information about x: in particular, the image of the function is very small, much smaller than 2^n, so f_pk(x) does not contain all the information about x. Okay, so using this type of lossy trapdoor permutation, here's the encoding of a message m using a long random CRS, a common random string. The encoding just consists of sampling the public key of the lossy trapdoor permutation in injective mode, along with the trapdoor; we take the message m, XOR it with the common random string, and invert the trapdoor permutation on this value. The codeword is then just the public key together with the inverse x. Anybody can decode by computing f_pk(x) and XORing it with the CRS, and that recovers the original message.
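The encode/decode template just described is easy to exercise with a stand-in permutation. Below is a minimal, illustrative Python sketch that uses textbook RSA as the injective-mode permutation; note that RSA is not lossy, so this demonstrates only correctness of the template, not the security argument, and the tiny parameters are hypothetical.

```python
# Toy instantiation of the encode/decode template, using textbook RSA as a
# stand-in trapdoor permutation. RSA is injective but NOT lossy, so this
# shows correctness only; all parameters are hypothetical and far too small
# for real use.
import secrets

P, Q, E = 1000003, 1000033, 65537        # toy primes and public exponent
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))        # the trapdoor

def f(x: int) -> int:                    # public direction f_pk(x)
    return pow(x, E, N)

def f_inv(y: int) -> int:                # trapdoor direction f_pk^{-1}(y)
    return pow(y, D, N)

def encode(m: int, crs: int) -> int:
    return f_inv(m ^ crs)                # codeword x = f_pk^{-1}(m XOR crs)

def decode(x: int, crs: int) -> int:
    return f(x) ^ crs                    # m = f_pk(x) XOR crs

crs = secrets.randbelow(1 << 39)         # long public random string (< N)
m = 123456789                            # one data block; m XOR crs stays < N
assert decode(encode(m, crs), crs) == m
```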
Now, to argue security, in the proof we're going to switch to choosing the value x uniformly at random. So the x component of the codeword is going to be chosen uniformly at random, and we're going to set the CRS to be f_pk(x) XOR the message. And if you look at it for a second, this distribution is exactly equivalent; it's just a different way of sampling the exact same distribution, and in particular the relation between the CRS and x is preserved. In the second step, we're going to switch the public key to lossy mode. When we do this, the CRS, f_pk(x) XOR m, only leaks a small amount of information about the random value x. In other words, even given the CRS, the value x in the codeword has a lot of entropy, and because it has a lot of entropy, it's incompressible. So what we did here is show that the codeword and the CRS are indistinguishable from a different way of sampling them, where we place the information about the message in the CRS, and the codeword is actually truly random; it has a lot of real entropy. Therefore, even given the CRS, the codeword is incompressible. That's the main idea behind the proof. I just want to make two remarks. Our full constructions rely on a relaxed notion of lossy trapdoor permutations, which we're able to construct from either the decisional composite residuosity assumption or the learning with errors assumption. In particular, we don't actually know how to construct trapdoor permutations from LWE, or from any post-quantum assumption, but the relaxed notion that we need for our actual construction we can achieve from post-quantum assumptions, so we get post-quantum security. I also want to mention two caveats of the construction. One is that in order to make this work, the CRS needs to be long, essentially as long as the message. And also, this construction achieves a weak form of selective security, where the adversary has to choose the message before seeing the CRS. We show that both of these caveats are inherent, via black-box separations, and one can overcome them only in the random oracle model. Now I want to end with an interesting open question, I think one of the most interesting open questions in this area. All of the constructions of incompressible encodings, from our work and prior work, require public-key crypto assumptions, some sort of trapdoor permutations or trapdoor functions. An interesting open question is: can you construct incompressible encodings without relying on public-key crypto, using one-way functions or just the random oracle model? We conjecture this is not possible, but we don't know. So I want to end with that open question, and thank you very much for listening.
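For reference, the proof just sketched compresses into a short hybrid argument. The lossiness parameter ℓ below is notation I'm introducing for exposition, not necessarily the paper's:

```latex
% Hybrid 0 (real):      (pk, td) injective;  x = f_{pk}^{-1}(m \oplus \mathrm{crs}).
% Hybrid 1 (resample):  x uniform;  \mathrm{crs} = f_{pk}(x) \oplus m
%                       (identical distribution, same (x, crs) relation).
% Hybrid 2 (lossy pk):  computationally indistinguishable from Hybrid 1.
% In Hybrid 2 the image of f_{pk} has size at most 2^{n-\ell}, so on average
\[
\widetilde{H}_\infty\big(x \mid \mathrm{crs}, m\big)
  \;=\; \widetilde{H}_\infty\big(x \mid f_{pk}(x)\big)
  \;\ge\; n - (n - \ell) \;=\; \ell ,
\]
% and a string with \ell bits of average min-entropy cannot be compressed
% below roughly \ell bits, giving incompressibility \beta \approx \ell.
```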

Published Date : Sep 21 2020

