Kelly Hoang, Gilead | WiDS 2023
(upbeat music) >> Welcome back to theCUBE's coverage of WiDS 2023, the eighth annual Women in Data Science conference, held at Stanford University. I'm your host, Lisa Martin. I'm really excited to have some great co-hosts today. I've got Hannah Freytag with me, a data journalism master's student at Stanford. We have yet another inspiring woman in technology to bring to you today. Kelly Hoang joins us, data scientist at Gilead. It's so great to have you, Kelly. >> Hi, thank you for having me today. I'm super excited to be here and share my journey with you. >> Let's talk about that journey. You recently got your PhD in information sciences, congratulations. >> Thank you. Yes, I just graduated. I completed my PhD in information sciences at the University of Illinois Urbana-Champaign. And I just moved to the Bay Area and started my career as a data scientist at Gilead. >> And you're in a better climate. Well, we do get snow here. >> Kelly: That's true. >> We proved that the last... And data science can show us all the climate change that's going on here. >> That's true. That's the topic of the datathon this year, right? To understand the changes in the climate. >> Yeah. Talk a little bit about your background. You were mentioning before we went live that you come from a whole family of STEM students. So you had that kind of in your DNA. >> Well, I consider myself a lucky case, maybe. I grew up in a family in a STEM environment. My dad was actually a professor in computer science. So I remember that at a very young age I was already seeing data and all of these computer science concepts. So growing up to be a data scientist was always something in my mind. >> You aspired to be. >> Yes. >> I love that. >> So I consider myself in a lucky place in that way. But also, during this journey to become a data scientist you need to navigate yourself too, right? You have these roots, this foundation, but you still need to figure out for yourself: is this really the career that you want to pursue? But I'm happy that I ended up here today, where I am right now. >> Oh, we're happy to have you. >> Yeah. So you're with Gilead now after completing your PhD. And were you always interested in the intersection of data science and health, or is that something you explored throughout your studies? >> Oh, that's an excellent question. So I did have a background in computer science, but I only really got into the biomedical domain when I did my PhD. My research during my PhD was natural language processing, NLP, and machine learning, and their applications in biomedical domains. And when I graduated, I got my first job at Gilead Sciences, which is super close and super relevant to my research at school. At Gilead, I am working in the advanced analytics department, and our focus is to bring artificial intelligence and machine learning into supporting clinical decision making. And really the ultimate goal is how to use AI to accelerate precision medicine. So yes, I'm very lucky that my first job is so close to my research at school. >> That's outstanding. You know, when we talk about AI, we can't not talk about ethics, bias. >> Kelly: Right. >> We know there's (crosstalk) Yes. >> Kelly: In healthcare. >> Exactly. Exactly. Inequities in healthcare, inequities in so many things. Talk a little bit about what excites you about AI, what you're doing at Gilead to really influence...
I mean, we're talking about something that's influencing life and death situations. >> Kelly: Right. >> How are you using AI in a way that really maximizes the opportunities AI can bring and the value in the data, but helps dial down some of the challenges that come with AI? >> Yep. So as you may know, with the digitalization of medical records we nowadays have tremendous opportunities to fulfill the dream of precision medicine. And what I mean by precision medicine is that treatments can now be really tailored to individual patients, depending on their own characteristics, demographics, or whatever. And natural language processing, machine learning, and AI in general really play a key role in that innovation, right? Because there is a vast amount of information about patients, and the patient journey or patient treatment is conducted and recorded in text. So that's why our group was established. Actually our department, the advanced analytics department at Gilead, is pretty new. We established our department last year. >> Oh wow. >> But really our mission is to bring AI into this field, because we see the opportunity now. We have a vast amount of data about patients and about their treatments. How can we mine these data, how can we understand and tailor the treatment to individuals, and give everyone better care? >> I love that you brought up precision medicine. You know, I always think, if I kind of abstract everything, technology, data, connectivity, we have this expectation in our consumer lives. We can get anything we want. Not only can we get anything we want, but we expect whoever we're engaging with, whether it's Amazon or Uber or Netflix, to know enough about me to get me that precise next step. I don't think about precision medicine, but you bring up such a great point. We expect these tailored experiences in our personal lives. Why not expect that in medicine as well? And have a tailored treatment plan based on whatever you have, based on data, your genetics, and being able to use NLP, machine learning and AI to drive that is really exciting. >> Yeah. You recapped it very well, but you also bring up a good point about the challenges of bringing AI into this field, right? Definitely this is an emerging field, but also a very challenging one, because we are talking about human health. We are doing work that has a direct impact on human health. So everything needs to be... Whatever machine learning model you are building and developing, you need to be precise. It needs to be evaluated properly before being used as a product and applied in real practice. So it's not like a recommendation system for shopping or anything like that. We're talking about our actual health. So yes, it's challenging in that way. >> Yeah. With that, you already answered one of the next questions I had, because medical data and health data are very sensitive. And how do you at Gilead try to protect this data, to protect the human beings who are behind the data in the end? >> The security aspect is critical. You bring up a great point about sensitive data. We think of healthcare as sensitive data, or PII if you're doing a bank transaction. We have to be so careful with that. Where is security, data security, in your everyday work practices within data science? Is it... I imagine it's a fundamental piece. >> Yes, for sure.
We at Gilead, for sure, in the data science organization, have intensive trainings for employees about data privacy and security and how to use the data. But at the same time, when we work directly with a dataset, it's not that we have direct information about patients at a very granular level. Everything needs to be anonymized at some point to protect patient privacy. So we do have rules and policies in place in our organization to follow. >> Very much needed. So some of the conversations we heard... were you able to hear the keynote this morning? >> Yes, I did. I attended. I listened to all of them. >> Isn't it fantastic? >> Yes, yes. Especially hearing these women from different backgrounds, at different levels of their professional lives, sharing their journeys. It's really inspiring. >> And Hannah and I have been talking about how a lot of those journeys look like this. >> I know. >> You just kind of go... It's very... Yours is linear, but you're kind of the exception. >> Yeah, this is why I consider my case lucky, to grow up in a STEM environment. But then again, back to my point at the beginning, sometimes you need to navigate yourself too. Like I mentioned, I did my ba... sorry, my bachelor's degree in Vietnam, in STEM, in computer science. And at that time there were only five girls in a class of 100 students. So I was not the smartest person in the room, and I was in the minority in that area, right? So at some point I asked myself, "Huh, I don't know. Is this really my career?" It seemed that others, the male students, did better than me. But I always had this passion for data. So you just navigate yourself, keep pushing yourself over the journey, and that's how I got to where I am right now. >> And look what you've accomplished. >> Thank you. >> Yeah. That's very inspiring. And yeah, you mentioned how you were in the classroom and you were only one of the few women in the room. What inspired or motivated you to keep going, even though sometimes you were at these points where you were like, "Okay, is this the right thing?" "Is this the right thing for me?" What motivated you to keep going? >> Well, I think personally for me, as a data scientist, or for women working in data science in general, I always try to find a good story in data. When you have a dataset, it's important to come up with methodologies, what you are going to do with the dataset. But I think it's even more important to get the context of the dataset. Think about what the story behind this dataset is. What can you get out of it, and what is the meaning behind it? How can we use it in a useful way, for some certain use case? So I always have that curiosity and encouragement in myself. Every time someone hands me a dataset, I think about that. So it helped me build up this passion, and then, yeah, become a data scientist. >> So you had that internal drive. I think it's in your DNA as well. When you were one of five. You were among the 5% of women in your computer science undergrad in Vietnam. Yet, as Hannah was asking you, you found a lot of motivation from within. You embraced that, which is so key. When we look at some of the statistics, speaking of data, of women in technical roles, we've seen it hover around 25% for the last few years, probably five to 10.
I was reading some data from anitab.org over the weekend, and it shows that in 2022 the number of women in technical roles rose slightly, but it rose, to 27.6%. So we're seeing the needle move slowly. But one of the challenges that still remains is attrition, women who are leaving the role. You've got your PhD. You have a 10-month-old, you've got more than one child. What would you advise to women who might be at that crossroads of not knowing, should I continue climbing the ladder in my career, or do I just go be with my family or do something else? What's your advice to them in terms of staying the path? >> I think it really comes down to following your passion. In any kind of job, not only in data science, right? If you want to be a baker, or you want to be a chef, or you want to be a software engineer, you really need to ask yourself: is this something that you're really passionate about? Because if you're really passionate about something, regardless of how difficult it is, regardless of how many kids you have to take care of, the whole family you have to take care of, this and that, you can still find time to spend on it. So really, let your passion drive you. Let it drive the way you're heading. I guess that's my advice. >> Kind of like following your own North Star, right? Is what you're suggesting. >> Yeah. >> What role have mentors played in your career path, to where you are now? Have you had mentors on the way, or people who inspired you? >> Well, I did. I certainly met quite a lot of women who inspired me during my journey. But right now, at this moment, one particular person who just popped into my mind is my current manager. She's also a data scientist. She's originally from the Caribbean, then came to the US, did her PhD too, and now leads a group that is all women. So believe it or not, I am in a group of all women working in data science. She's really someone who inspires me a lot, someone I look up to in this career. >> I love that. You went from being one of five females in a class of 100, to now having a PhD in information sciences, and being on an all-female data science team. That's pretty cool. >> It's great. Yeah, it's great. And you see how fascinating it is, how things shift, right? And now today we are here at a conference that is all about women in data science. >> Yeah. >> It's extraordinary. >> So we're fortunate to have WiDS coincide this year with the actual International Women's Day, March 8th, which is so exciting. It's always around this time of year, but it's great to have it on the day. The theme of International Women's Day this year is embrace equity. When you think of that theme, and your career path, and what you're doing now, and who inspires you, how can companies like Gilead benefit from embracing equity? What are your thoughts on that as a theme? >> So I feel like I'm very lucky to have gotten my first job at Gilead, not only because the work we are doing here is very close to my research at school, but also because of the working environment at Gilead. Inclusion actually is one of the five core values of Gilead. >> Nice. >> By that, we mean we try to create a working environment where all of the differences are valued, regardless of your background or your gender.
So at Gilead, we have Women at Gilead, which is a global network of female employees that helps us strengthen our inclusion culture, and also brings our voices into the company culture, policy, and practice. So yeah, I'm very lucky to work in this environment. >> It's impressive to not only hear that you're on an all-female data science team, but what Gilead is doing and the actions they're taking. It's one thing, and we've talked about this, Hannah, for companies, regardless of industry, to say we're going to have 50% women in our workforce by 2030, 2035, 2040. It's a whole other ballgame for companies like Gilead to actually be putting pen to paper, to actually be creating a strategy that they're executing on. That's awesome. And it must feel good to be a part of a company that's really adapting its culture to be more inclusive, because there's so much value that comes from inclusivity and thought diversity, which ultimately will help Gilead produce better products and services. >> Yeah. Yes. Actually, this is the first year Gilead is a sponsor of the WiDS conference. And we are so excited to establish this relationship, and we look forward to having more collaboration with WiDS in the future. >> Excellent. Kelly, we've had such a pleasure having you on the program. Thank you for sharing your linear path. You are definitely a unicorn. We appreciate your insights and your advice to those who might be navigating similar situations. Thank you for being on theCUBE today. >> Thank you so much for having me. >> Oh, it was our pleasure. For our guest and Hannah Freytag, this is Lisa Martin from theCUBE, coming to you from WiDS 2023, the eighth annual conference. Stick around. Our final guest joins us in just a minute.
SPARKs: Succinct Parallelizable Arguments of Knowledge
>> Hello, everyone. Welcome to the summit. My name is Ilan Komargodski, and I will talk about SPARKs, succinct parallelizable arguments of knowledge. This talk is based on joint work with Naomi Ephraim, Cody Freitag, and Rafael Pass. Let me start by telling you what succinct arguments are. A succinct argument is a special type of interactive protocol between a prover and a verifier who share some instance x, which is allegedly in some language. The goal of the protocol is for the prover to convince the verifier that x is indeed in the language. For completeness, the guarantee is that if x is indeed in the language, the verifier will, at the end of the protocol, indeed be convinced. On the other hand, for soundness, we require that if x is not in the language, then no matter what the prover does, as long as it is bounded to run in polynomial time, the verifier will not be convinced. There is a stronger notion of soundness, called an argument of knowledge, which says that the only way for the prover to convince the verifier is by knowing some witness; there is a mathematical way to formalize this notion, but I will not get into it. For efficiency, what makes the protocol succinct is that we require the verifier's running time, and the communication complexity between the prover and the verifier, to both be bounded by some polylogarithmic function in T, where T is the time it takes to verify the statement directly. In terms of the prover's running time, we don't require anything except that it is, say, polynomial. The goal of this work is to improve this polynomial overhead of the prover.

To explain why this is an important task, let me give you a motivating example, which is the concept of delegation of computation. Consider some small device, like a laptop or a smartphone, that we use to perform some complicated computation which it cannot do on its own. It wishes to delegate the computation to some service or cloud that performs the computation for it. Since the small device does not fully trust the service, it may want to ask the service to also issue a proof of correctness of the computation. The problem is that if generating the proof takes much more time than just performing the computation, it's not clear that this will be useful in practice. Think of an overhead which is the square of the time it takes to perform the computation: this very quickly becomes a very big number, a very large delay for generating the proof. We're not the first to study this problem. It has been studied for several decades, and at least from a theoretical point of view the problem is essentially solved: we have constructions of argument systems with small overhead, just a polylogarithmic multiplicative overhead, obtained by combining efficient PCPs together with Kilian's protocol. There is a huge open problem in complexity theory of constructing PCPs with constant overhead, namely PCPs generated in time linear in the running time of the computation. But we argue that even if we had such a PCP and the constant was great, say it was just two, this would already be too much, because if the delegated computation takes a month to complete, then waiting another month just for the proof might not be so reasonable.
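To summarize the efficiency requirements in symbols (a summary I added; the notation is mine, not the speaker's, with λ a security parameter and T the time to check the statement directly):

\[
\mathrm{time}(V),\ \mathrm{cc}(P,V) \;\le\; \mathrm{poly}(\lambda,\log T) \qquad \text{(succinctness)}
\]
\[
\mathrm{time}(P) \;\le\; \mathrm{poly}(\lambda, T) \qquad \text{(what a standard succinct argument asks of the prover)}
\]
\[
\mathrm{time}_{\parallel}(P) \;\le\; T + \mathrm{poly}(\lambda,\log T) \ \text{ using } \ \mathrm{poly}(\lambda,\log T) \ \text{processors} \qquad \text{(the target the talk builds toward)}
\]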
There is, however, a partial solution to this problem in the literature, and I'll show that there is a PCP construction with the following very useful property: once you perform the computation itself and write down the computation tableau, there is a way to generate every symbol of the PCP in just polylogarithmic time. This means that, after computing the function itself, you can compute the whole PCP in parallel in just polylogarithmic time. This gives us a succinct argument system with just T plus polylog(T) parallel time, instead of T times polylog(T) time. But for this we need about T processors, which is prohibitively large.

This is where SPARKs come in. We introduce the notion, or the paradigm, of computing the proof in parallel to the computation, not after the computation is done. Slightly more formally, a SPARK is just a succinct argument of knowledge, like what we said before, with the verifier's running time and the communication complexity being small, but now we also require a prover which is super efficient. Namely, it can be parallelized, it has to finish the proof together with the computation in time T plus polylog(T), which is essentially the best you can hope for, and we want the prover to do so with only a polylogarithmic number of processors. You can also extend the definition to handle computations which are, to begin with, parallelizable, but I will not touch upon this in the talk; you can see the paper.

As for results, we have two main results. The first main result is a construction of an interactive SPARK. It has just four rounds, and it assumes only collision-resistant hash functions. The second result is a non-interactive SPARK. This result also assumes collision-resistant hash functions and, in addition, the existence of any SNARK, namely a succinct non-interactive argument of knowledge, which does not have to be super efficient in terms of prover time. Slightly more generally, the two theorems follow from a combined framework which takes essentially any argument of knowledge and turns it into a SPARK, assuming only collision-resistant hash functions; the main idea behind the construction can be viewed as a trade-off between computation time and processors. We instantiate theorem one using Kilian's protocol, which is a four-round argument of knowledge, and we instantiate theorem two using a SNARK, which is an argument of knowledge just by definition.

Let me tell you the main ideas underlying our construction, and to do so let me make some simplifying assumptions. First, I will only be talking about the non-interactive regime. Second, I am going to assume a SNARK which is super efficient, so it runs in time 2T for a computation that takes time T; almost what we want, but not quite there yet. I will also assume that the computation we want to perform is sequential, and additionally that the computation has no space, or very low space, so think about a sequential computation which uses hardly any memory, or even none. Toward the end, I will discuss how to remove these simplifying assumptions.

The starting idea is based on two earlier works from a couple of years ago, and here is how it works. Remember, we want to perform a T-time computation, generate a proof, and finish roughly by time T. The idea is to run half of the computation and prove it, which is what we can afford, because we have a SNARK that can generate that proof in an additional T/2 steps; so we can run the first half of the computation and prove it, all within time T. And the idea is that now we can recursively compute and prove the rest of the computation in parallel. Here is how it looks: you run half of the computation and start its proof; then you run half of what remains, which is a quarter of the original computation, and prove it; in parallel again, you take another eighth of the computation, which is one half of what's left, and so on, and so forth. As you can see, eventually you finish the whole computation, you only need something like logarithmically many parallel processors, and the communication complexity and the verifier's running time grow only by a logarithmic factor. So this is the main idea.
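To make that schedule concrete, here is a tiny simulation I added for illustration; it is not code from the talk or the paper, and the function name, the proof_overhead parameter, and the halve-what-remains rule are my own simplifications:

    def spark_schedule(T, proof_overhead=1.0):
        """Toy model of 'compute and prove in parallel'.

        The T-step computation runs sequentially.  Each time a segment ends,
        a dedicated prover starts proving it (taking proof_overhead * length
        extra steps) while the computation continues with a segment half as
        long.  Returns the per-segment times and when the last proof ends.
        """
        segments = []                       # (comp_start, comp_end, proof_end)
        done, remaining = 0, T
        while remaining > 0:
            length = max(1, remaining // 2)             # half of what is left
            comp_start, comp_end = done, done + length
            proof_end = comp_end + proof_overhead * length
            segments.append((comp_start, comp_end, proof_end))
            done, remaining = comp_end, remaining - length
        finish = max(p for _, _, p in segments)
        return segments, finish

    segs, finish = spark_schedule(1_000_000)
    print(len(segs), finish)   # ~log2(T) segments/provers; all proofs done by ~T + 1

With proof_overhead = 1, which models the assumed SNARK that proves a k-step segment in roughly k extra steps, every proof finishes by about time T + 1, and only about log2(T) segments, hence provers, are ever active.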
Let's go back to the simplifying assumptions we had. The first one was that I would only talk about the non-interactive regime. You have to believe me that the same ideas extend to the interactive case, which is just a little heavier in notation, so I will not talk about it anymore. The second assumption was that I have a super efficient SNARK, with overhead 2T for a T-time computation. Again, you have to believe me that if you work out the math, the ideas extend to SNARKs with quasilinear overhead, namely SNARKs whose prover runs in time T times polylog(T); and then the result extends to any SNARK, because of a previous work which showed that a SNARK whose prover runs in polynomial time can be generically translated into a SNARK whose prover has quasilinear overhead. So this gives a result from any SNARK, not only from very efficient SNARKs. The next assumption was that we are dealing only with sequential RAM computations, and again, you have to believe me that the ideas can be extended to PRAMs. The last assumption, which is the focus of this work, is how to get rid of the small-space assumption. This is what I'm going to talk about next.

Let's see what goes wrong if the computation has space. Remember what we did a couple of slides ago: the construction was to perform half of the computation and prove it, then half of the remaining computation and prove it, and so on. If you write down the statement that each of these proofs proves, it is something like: a machine M, on input x, executed for some number of steps, starting from some state and ending at some other state. Notice that the statement itself depends on the space of the computation; therefore, if the space of the computation is nontrivial, the statements are large, so the communication is large, so the verifier's running time is proportional to the space, and so on. We don't even get a succinct argument if we do it naively. Here is a solution for this problem. You can say: well, you don't have to include the whole space in the statement; you can include only a digest of the space. Think about some hash of the space. So indeed, you can modify the statement to not include the space, but only a digest.
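As an illustration of the kind of digest one might use here, the following is a minimal Merkle-tree sketch that I added; it is not the construction from the paper, which additionally needs the pipelined, level-by-level updates described next, and the class and method names are my own:

    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    class MerkleDigest:
        """Digest of a memory of 2**k cells; changing one cell only requires
        rehashing the leaf-to-root path of that cell."""
        def __init__(self, cells):
            assert (len(cells) & (len(cells) - 1)) == 0, "power-of-two size"
            self.n = len(cells)
            self.nodes = [b""] * (2 * self.n)
            for i, c in enumerate(cells):               # leaves
                self.nodes[self.n + i] = H(c)
            for i in range(self.n - 1, 0, -1):          # internal nodes
                self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])

        def root(self) -> bytes:
            return self.nodes[1]

        def update(self, index: int, value: bytes) -> bytes:
            """Write `value` into cell `index`; O(log n) hash calls."""
            i = self.n + index
            self.nodes[i] = H(value)
            i //= 2
            while i >= 1:
                self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])
                i //= 2
            return self.root()

    mem = [b"\x00"] * 8
    d = MerkleDigest(mem)
    d_start = d.root()
    d_end = d.update(3, b"\x2a")        # one RAM step that writes cell 3
    print(d_start.hex()[:16], d_end.hex()[:16])

The only point of the sketch is that a single memory write touches one leaf-to-root path, so the digest can be refreshed with O(log n) hash calls rather than by rehashing the whole space.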
And now the statement will be a little bit more complicated. It will be that there exist some initial state and some final state whose hashes are consistent with the digests in the statement, and such that if you run the machine M for k steps starting from the initial space, you end up with the final space. So this is great: it indeed solves the communication complexity problem and the verifier complexity problem. But notice that from the prover's side we didn't actually gain anything, because we just pushed the complexity into the witness; the prover's running time is still very large with this solution.

Our final solution, at a very high level, is to compress the witness. Instead of using the whole space as the witness, we will use the computation itself, the computation that we ran, as the witness. The statement will be of the same form, so it will still consist of two digests and a machine, but now the witness will not be the whole state; it will be the k steps that we performed. Namely, the statement says that there exist k steps such that if I start with the initial digest and apply these k steps to it, I end up with the final digest. In order to implement this, we need some sort of updatable digest. This is not so hard to obtain, because you can use something like a Merkle tree, and it's not hard to see that you can update locations in a Merkle tree quite efficiently. But the problem is that we not only need to be able to update the hashes, the digest, we also need to be able to compute the updates in parallel to the computation. To this end, we introduce a variant of Merkle trees and show how to perform all of those updates level by level in the Merkle tree, in a pipelined fashion. Namely, we push the updates of the digest into the Merkle tree one after the other, without waiting for the previous ones to finish, and here we are using the tree structure of Merkle trees.

That's all I'm going to say about the protocol; let me just end by showing you how the final protocol looks. We run k steps of the computation, and we compute the k updates for those k steps in parallel with the computation. So every time we run a step of the computation, we also start an update of our digest. And once we have finished computing all the updates, we can start running a proof using those updates as the witness, and we recursively continue this way. As a conclusion, this gives a SPARK, namely a succinct argument system where the prover's running time is T plus polylog(T), and all we need is something like a polylogarithmic number of processors. I would like to mention that this is a theoretical result and by no means should be taken as a practical thing that should be implemented, but I think it is important to work on, and there are a lot of interesting questions about how to make this really practical and useful. So with that, I'm going to end. Thank you so much for inviting me, and enjoy the rest of the session.
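To recap the final protocol in code form, here is an editorial sketch I added; it reuses the MerkleDigest sketch from above, the step and prove callbacks are hypothetical stand-ins, and the paper's actual algorithm pipelines the digest updates rather than applying them one at a time as done here:

    from concurrent.futures import ThreadPoolExecutor

    def run_and_prove(step, prove, mem, T):
        """Editorial sketch of the final loop described above, not the paper's
        algorithm: run a segment of k steps, refreshing the memory digest after
        every step, then hand (d_start, d_end, updates) to a prover that runs
        in the background while the next, half-sized segment is computed."""
        d = MerkleDigest(mem)                          # digest sketch from above
        futures, remaining, pos = [], T, 0
        with ThreadPoolExecutor() as pool:
            while remaining > 0:
                k = max(1, remaining // 2)             # half of what is left
                d_start, updates = d.root(), []
                for _ in range(k):
                    index, value = step(mem, pos)      # hypothetical: one RAM step = one write
                    mem[index] = value
                    updates.append((index, value))
                    d.update(index, value)
                    pos += 1
                futures.append(pool.submit(prove, d_start, d.root(), updates))
                remaining -= k
        return [f.result() for f in futures]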