
SPARKs: Succinct Parallelizable Arguments of Knowledge


 

>>Hello, everyone. Welcome to the summit. My name is Ilan Komargodski, and I will talk about SPARKs: succinct parallelizable arguments of knowledge. This talk is based on joint work with Naomi Ephraim, Cody Freitag, and Rafael Pass.

Let me start by telling you what succinct arguments are. A succinct argument is a special type of interactive protocol between a prover and a verifier who share some instance x, which is allegedly in some language. The goal of the protocol is for the prover to convince the verifier that x is indeed in the language. Completeness guarantees that if x is indeed in the language, then at the end of the protocol the verifier will indeed be convinced. On the other hand, soundness requires that if x is not in the language, then no matter what the prover does, as long as it is bounded to run in polynomial time, the verifier will not be convinced. There is a stronger notion of soundness called an argument of knowledge, which says that the only way for the prover to convince the verifier is by knowing some witness; there is a mathematical way to formalize this notion, but I will not get into it. As for efficiency, what makes the protocol succinct is that we require the verifier's running time, and the communication complexity between the prover and the verifier, to both be bounded by some polylogarithmic function of T, where T is the time it takes to verify the statement directly. In terms of the prover's running time, we don't require anything except that it is, say, polynomial.

The goal of this work is to improve this polynomial overhead of the prover. To explain why this is an important task, let me give you a motivating example: delegation of computation. Consider some small device, like a laptop or a smartphone, that we use to perform some complicated computation which it cannot do on its own.
Since it is a small device, it wishes to delegate the computation to some service or cloud to perform the computation for it. Since the small device does not fully trust the service, it may want to ask the service to also issue a proof of correctness of the computation. The problem is that if the proof takes much more time than just performing the computation, it's not clear that this is something that will be useful in practice. Think of an overhead which is the square of the time it takes to perform the computation: this becomes, very quickly, a very large delay for generating the proof.

We're not the first to study this problem. It has been studied for several decades, and at least from a theoretical point of view the problem is essentially solved: we have constructions of argument systems whose overhead is just a polylogarithmic multiplicative factor. This is obtained by combining efficient PCPs together with Kilian's argument system. There is a huge open problem in complexity theory of constructing PCPs with constant overhead, namely PCPs that can be generated in time linear in the running time of the computation. But we argue that even if we had such a PCP, and the constant were great, say just two, this would already be too much: if you delegate a computation that takes a month to complete, then waiting another month just for the proof might not be so reasonable.

There is a solution in the literature for this problem, using what we call a parallelizable PCP. There is a PCP construction with the following very useful property: once you perform the computation itself and write down the computation tableau, there is a way to generate every symbol of the PCP in just polylogarithmic time.
So this means that after computing the function itself, you can compute the whole PCP in parallel in just polylogarithmic time. This gives us a great argument system with just T + polylog(T) parallel time, instead of T · polylog(T) time. But for this we need about T parallel processors, which is prohibitively large.

This is where SPARKs come in. We introduce the notion, or the paradigm, of computing the proof in parallel to the computation, not after the computation is done. Slightly more formally, a SPARK is a succinct argument of knowledge, like what we said before, with the verifier's running time and the communication complexity being small, but now we also require a prover which is super efficient. Namely, the prover can be parallelized, and it has to finish the proof together with the computation in time T + polylog(T), which is essentially the best you can hope for. And we want the prover to do so with only a polylogarithmic number of processors. You can also extend the definition to handle computations which are parallelizable to begin with, but I will not touch upon this in the talk; you can see the paper.

We have two main results. The first main result is a construction of an interactive SPARK. It has just four rounds, and it assumes only collision-resistant hash functions. The second result is a non-interactive SPARK. This result also assumes collision-resistant hash functions and, in addition, the existence of any SNARK, namely a succinct non-interactive argument of knowledge, which does not have to be super efficient in terms of prover time. Slightly more generally, the two theorems follow from a combined framework which takes essentially any argument of knowledge and turns it into a SPARK, assuming only collision-resistant hash functions. The main idea behind the construction can be viewed as a trade-off between computation time and processors.
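As a rough illustration of these efficiency regimes, the three provers discussed so far can be compared side by side. The constants and the polylog exponent below are made up for illustration; only the asymptotic shapes come from the talk:

```python
import math

def polylog(T, c=2):
    """A stand-in polylogarithmic function, here (log2 T)^2."""
    return math.log2(T) ** c

def naive_prover(T):
    """Sequential prover with multiplicative polylog overhead: (time, processors)."""
    return T * polylog(T), 1

def parallel_pcp_prover(T):
    """Compute first, then generate the whole PCP in parallel: fast, but ~T processors."""
    return T + polylog(T), T

def spark_prover(T):
    """SPARK: the proof finishes alongside the computation, with polylog processors."""
    return T + polylog(T), polylog(T)
```

For, say, T = 2^20, the SPARK prover matches the parallel-PCP finishing time of T + polylog(T) while using only polylogarithmically many processors instead of about T.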
We instantiate Theorem 1 using Kilian's protocol, which is a four-round argument of knowledge, and we instantiate Theorem 2 using a SNARK, which is an argument of knowledge just by definition.

Let me tell you the main ideas underlying our construction. Before getting to the actual ideas, let me make some simplifying assumptions. First, I will only be talking about the non-interactive regime. Second, I'm going to assume a SNARK, namely a non-interactive succinct argument of knowledge, and moreover that the SNARK is super efficient: it runs in time 2T for a computation that takes time T. This is almost what we want, but not quite there yet. Third, I will assume that the computation we want to perform is sequential. And additionally, I will assume that the computation has no space, or very low space; so think about a sequential computation which doesn't use a lot of space, or even zero space for the time being. Later I will discuss how to remove these simplifying assumptions.

The starting idea is based on two prior works from a couple of years ago, and here's how it works. Remember, we want to perform a time-T computation, generate a proof, and finish roughly by time T. The idea is to run half of the computation and prove it, which is what we can afford: since we have a SNARK that can generate the proof in an additional T/2 steps, we can run the first half of the computation and prove that half all in time T. And the idea is that now we can recursively compute and prove the rest of the computation in parallel. Here's how it looks: you run half of the computation and start its proof, and then you run half of the remaining computation, which is a quarter of the original one, and prove it.
And in parallel again, you take another eighth of the computation, which is one half of what's left, and so on and so forth. As you can see, eventually you finish the whole computation, you only need logarithmically many parallel processors, and the communication complexity and the verifier's running time only grow by a logarithmic factor.

So this is the main idea. Let's go back to the simplifying assumptions we made. The first one was that I would only talk about the non-interactive regime. You have to believe me that the same ideas extend to the interactive case, which is a little more involved notationally, so I will not talk about it anymore. The second assumption was that I have a super-efficient SNARK, with overhead 2T for a time-T computation. Again, you have to believe me that if you work out the math, the ideas extend to SNARKs with quasilinear overhead, namely SNARKs whose prover runs in time T · polylog(T). The result then extends to any SNARK because of a previous work of Bitansky et al., who showed that a SNARK whose prover runs in polynomial time can be generically translated into a SNARK whose prover runs with quasilinear overhead. So this gives a result from any SNARK, not only from very efficient ones. The third assumption was that we deal only with sequential RAM computations; again, you have to believe me that the ideas can be extended to parallel RAMs. And the last assumption, which is the focus of this work, is how to get rid of the small-space assumption. This is what I'm going to talk about next.

Let's see what goes wrong if the computation has space. Remember what we did a couple of slides ago: the construction was to perform half of the computation and prove it, then half of the remaining computation and prove it, and so on.
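The halving schedule described above can be sketched in a few lines. This is only an illustration under the idealized assumption, as in the simplified setting of the talk, that proving a chunk of t steps takes another t steps and that all proofs run in parallel:

```python
# Sketch of the "prove half, recurse on the rest" schedule from the talk.
# Idealized accounting: proving a chunk of t steps takes another t steps,
# and every chunk's proof runs on its own processor.

def split_schedule(T):
    """Chunk sizes T/2, T/4, ..., 2, 1, 1 covering a T-step computation."""
    pieces, remaining = [], T
    while remaining > 1:
        pieces.append(remaining // 2)
        remaining -= remaining // 2
    pieces.append(remaining)  # the final step is proved directly
    return pieces

def proof_finish_times(T):
    """For each chunk: (time its computation ends) + (time to prove it)."""
    finish, done = [], 0
    for t in split_schedule(T):
        done += t
        finish.append(done + t)
    return finish
```

For T = 1024 this yields 11 chunks summing to 1024, every chunk's proof finishes by time T + 1, and one processor per in-flight proof, so logarithmically many, suffices.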
If you write down the statement that each of these proofs proves, it's something like: a machine M on input x, executed for some number of steps starting from some state, ended at some other state. Notice that the statement itself contains the state, so it depends on the space of the computation. Therefore, if the space of the computation is nontrivial, the statements are large, so the communication is large, so the verifier's running time is proportional to the space, and so on. We don't even get a succinct argument if we do it naively.

Here's a solution for this problem. You can say: well, you don't have to include the whole space in the statement; you can include only a digest of the space (think of some hash function applied to the space). So indeed, you can modify the statement to include not the space itself but only a digest. Now the statement is a little more complicated: it says that there exist some initial state and end state whose hashes are consistent with the digests in the statement, such that if you run the machine M for k steps starting from the initial space, you end up with the final space. This is great: it indeed solves the communication-complexity problem and the verifier-complexity problem. But notice that on the prover's side we didn't actually gain anything, because we just pushed the complexity into the witness, so the prover's running time is still very large with this solution.

Our final solution, at a very high level, is to compress the witness. Instead of using the whole space as the witness, we use the computation itself, the k steps that we ran, as the witness. The statement is of the same form: it still consists of two digests and a machine. But now the witness is not the whole state; it is the k steps that we performed.
Namely, the witness is the k steps I performed, such that if I start with the initial digest and apply these k steps to it, I end up with the final digest. To implement this, we need some sort of updatable digest. This is not so hard to obtain: you can use something like a Merkle tree, and it's not hard to see that you can update locations in a Merkle tree quite efficiently. The problem is that it's not enough to be able to update the digest; we also need to compute the updates in parallel to the computation. To this end, we introduce a variant of Merkle trees and show how to perform all of those updates, level by level in the Merkle tree, in a pipelined fashion. Namely, we push the updates of the digest into the Merkle tree one after the other, without waiting for the previous ones to finish, exploiting the tree structure of Merkle trees.

That's all I'm going to say about the protocol. Let me just end by showing how the final protocol looks. We run k steps of the computation, and we compute the k updates for those k steps in parallel to the computation: every time we run a step of the computation, we also start an update of our digest. Once we have finished computing all the updates, we can start generating a proof using those updates as the witness, and we recursively continue this way. As a conclusion, this gives a SPARK, namely a succinct argument system where the prover's running time is T + polylog(T), and all we need is roughly a polylogarithmic number of processors. I would like to mention that this is a theoretical result, and by no means should it be taken as a practical thing that should be implemented.
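As a concrete toy version of the updatable digest, here is a plain Merkle tree over a fixed-size memory, with O(log n) hashes per update. This is only a sketch: it omits domain separation between leaves and internal nodes, and it does not reproduce the pipelined, level-by-level variant from the talk:

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

class MerkleDigest:
    """Toy Merkle-tree digest over a memory whose size is a power of two.

    Illustration only: a single update rewrites one leaf and refreshes the
    log(n) hashes on its root path, so the digest can be maintained while
    the computation runs.
    """
    def __init__(self, leaves):
        n = len(leaves)
        assert n & (n - 1) == 0 and n > 1, "size must be a power of two"
        self.n = n
        # tree[1] is the root; leaf i lives at tree[n + i]
        self.tree = [b""] * (2 * n)
        for i, leaf in enumerate(leaves):
            self.tree[n + i] = _h(leaf)
        for i in range(n - 1, 0, -1):
            self.tree[i] = _h(self.tree[2 * i] + self.tree[2 * i + 1])

    def root(self):
        return self.tree[1]

    def update(self, i, leaf):
        """Rewrite leaf i, then rehash the path from that leaf to the root."""
        i += self.n
        self.tree[i] = _h(leaf)
        i //= 2
        while i >= 1:
            self.tree[i] = _h(self.tree[2 * i] + self.tree[2 * i + 1])
            i //= 2
```

Updating one memory cell and rebuilding the whole tree from the modified memory produce the same root, which is the consistency property the compressed witness relies on.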
But I think that it is important to work on it, and there are a lot of interesting questions about how to make this really practical and useful. So with that, I'm going to end. Thank you so much for inviting me, and enjoy the rest of the event.

Published Date : Sep 24 2020
