Byron Cook, Amazon | AWS re:Inforce 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering AWS re:Inforce 2019, brought to you by Amazon Web Services and its ecosystem partners. >> Hey, welcome back, everyone, to theCUBE's live coverage here in Boston, Massachusetts for AWS re:Inforce, Amazon Web Services' inaugural event around cloud security. I'm John Furrier with Dave Vellante. Two days of coverage; we're winding down day two. We're excited to have with us in theCUBE a special guest who is part of one of the big announcements. Well, I think it's big; it's a nerdy announcement: automated reasoning. Byron Cook, director of the Automated Reasoning Group within AWS. Again, this is part of the team that's going to help figure out security, using automation to augment humans. Great to have you on; you're a big part of the show here. >> Thanks very much. >> Explain the Automated Reasoning Group. Werner Vogels had a great blog post on All Things Distributed: it applies formal verification techniques in an innovative way to cloud security and compliance, for our customers and for our own developers. What does that mean? It's math? >> Yeah, let me try. I'll give you one explanation, and if it's puzzling I'll try to explain it a different way. So, do you know the Pythagorean theorem? >> Yeah, sure. >> So the Pythagorean theorem is about all triangles, and it was proved in approximately 300 B.C. The proof is a finite description in logic as to why it's true, and it holds for all possible triangles. So we're basically using the same approach to prove properties of policies, of networks, of programs, for example crypto, virtualization, storage, et cetera. So we write software that finds proofs in mathematics, and the proofs are the same sort of thing as what was found for the Pythagorean theorem. >> And that solves problems that have become these mundane tasks: checking config files, making sure things are right. >> Kind of; I'll give you an example. So there's s2n, which is the TLS implementation used, for example, in S3 and across the large majority of AWS. It has approximately 12,000 state-holding elements, more if you include the stack and the heap usage, so the number of possible states it could reach is two to the 12,000. And if you wanted to show that the TLS handshake implementation is correct, or the HMAC implementation is correct, or the deterministic random bit generator implementation is correct, using conventional methods like trying to run tests on it, then even with, like, a million microprocessors you would need many more lifetimes than the sun is going to emit light, which is another three to four billion years, to exhaustively test the system. So rather than just running a bunch of inputs on the code, we represent it as a mathematical system and then we use proof techniques to automatically search for a proof, and with our tools, in about ten minutes, we're able to prove all of those properties of s2n. We've then applied that to pieces of S3 and to pieces of the EC2 virtualization infrastructure.
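To put that scale in perspective, here is a rough back-of-the-envelope calculation in Python; the machine count, test rate, and the four-billion-year figure are our own illustrative assumptions, not Cook's exact numbers.

```python
# Back-of-the-envelope: even a wildly generous testing budget cannot
# begin to cover 2**12000 reachable states (illustrative numbers only).
machines = 10**6                              # a million test hosts
rate = 10**12                                 # tests per second per host
seconds = 4_000_000_000 * 365 * 24 * 3600     # ~4 billion years of sunlight
feasible_tests = machines * rate * seconds
states = 2**12000

print(f"feasible tests ~ 10^{len(str(feasible_tests)) - 1}")  # ~ 10^35
print(f"states to cover ~ 10^{len(str(states)) - 1}")         # ~ 10^3612
```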
Then what we realized is that customers had a lot of questions about their networks and their policies. So, for example, they have a complicated worldwide network, different availability zones, different regions, and they want to ask: hey, does there exist a way for this machine to connect to this other machine? Or, you know, does all SSH traffic coming in that eventually gets to my web server go through a bastion host, which is the best practice? And we can answer that question, again using logic. So we take the representation of the semantics of EC2 networking, the policy and the network from the customer, and the question we're asking, express it in logic, and we throw it at a theorem prover and get the answer back. And then the same for policy. >> So you're analyzing policies? >> Policies, networks, programs. >> Networks, connections. Yeah, right. And the tooling is Zelkova? >> So basically we come with an approach, and then we have many tools that implement the approach on different problems; that's how Zelkova applies. Underneath, it all uses a kind of tool called an SMT solver. So there are SAT solvers, which prove theorems about formulas in propositional logic, and SMT is satisfiability modulo theories; those tools can prove properties of problems expressed in first-order logic. And so, for example, if you have a question about your policies, answering semantic-level questions about policies is actually a PSPACE problem, so it's harder than NP-complete. We express the question in logic, call the solver, get the answer back, and marshal it back; that's what Zelkova does. It calls a tool called CVC4, which is an open-source prover. So with Zelkova we take the policy and the question, encode them into logic, call the solver, and marshal the answer back. >> What's the root of this? I mean, presumably there's some academic research that was done and you're applying it to your specific use case, but can you share the origination of this? >> So the first NP-complete problem was discovered by a Cook, not me, another Cook, in the early seventies. He proved that the propositional satisfiability problem is NP-complete. Meanwhile, there has been a lot of research going back to the sixties; Davis and Putnam, for example, I think in a paper from the mid-sixties, were trying to answer the question of whether we can efficiently solve this NP-complete propositional satisfiability problem, and that research continued. There have been a bunch of breakthroughs, and now we're really starting to see results: there was a big breakthrough in 2001, and then some further breakthroughs in the 2005-to-2008 range. So what we're seeing is that the solvers are getting better and better. There's an international competition of usually about 30 solvers, and there was a study recently where they took all of the winners from that competition each year, from 2002 to 2011, and compared them on the same benchmarks and hardware. The 2002 solver was able to solve a quarter of the benchmarks, the 2011 solver solved practically all of them, and the 2019 solvers are even better. Nowadays they can take problems in logic that have many tens of millions of variables and solve them very efficiently. So we're really using the power of those underlying solvers and marshaling the questions to them.
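To make the encoding idea concrete, here is a toy sketch of posing a network-reachability question to an SMT solver. It is our illustration, not AWS's Zelkova implementation, and it uses the open-source Z3 solver's Python bindings as a stand-in for a prover like CVC4; the variables and the single network rule are invented for the example.

```python
# Toy encoding of "can SSH traffic reach the web server without passing
# through the bastion host?" as a satisfiability question.
from z3 import Bools, Solver, Implies, And, Not, sat

ssh_in, via_bastion, reaches_web = Bools("ssh_in via_bastion reaches_web")

s = Solver()
# Assumed network rule: SSH traffic only reaches the web server if it
# first passes through the bastion host.
s.add(Implies(And(ssh_in, Not(via_bastion)), Not(reaches_web)))

# Ask the solver for a violating scenario: SSH that skips the bastion
# yet still reaches the web server.
s.add(ssh_in, Not(via_bastion), reaches_web)

if s.check() == sat:
    print("violation possible:", s.model())
else:
    print("no violating path exists; the property holds in all cases")
```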
>> You're codifying thinking into math. You gave a talk in one of the sessions around provable security. What is that? Can you explain that concept at the top level? >> Sure. So mathematical logic, you know, is 2,000 years old, and it has been refined; Boole, for example, made logic less of a philosophical thing and more of a mathematical thing. Then automated reasoning was developed in the sixties, where you take algorithms and apply them to find proofs in mathematical logic. And provable security is the application of automated reasoning to questions in security and compliance. So you want to prove the absence of memory corruption errors in C code, you want to prove termination of event-handling routines that are supposed to handle security events. All of those questions are properties of your program, and you can use these tools to automatically find proofs, or to check proofs that have been found manually. That's where provable security fits. >> What was the makeup of the attendee list? Were people dropping in, were people excited, was it all a bunch of math geeks? You have a cross-section of great security people here, and there are deep-dive conversations, not like re:Invent; this show is really deep security. What was some of the feedback and the makeup of the attendees? >> I'll give you two answers, because I actually gave two talks, and the answers are a little bit different because of the subjects of the talks. There was one on provable security, which was basically the foundations of logic and how tools like Zelkova and our program verification work, because we also prove correctness of crypto and so on. That audience was largely folks who had heard about it, wanted to know more, wanted to know how we're using it, and were trying to learn. There was a second talk, which was about the application of it to compliance. That was with Tom McAndrew, who is the CEO of Coalfire, one of the third-party auditors that AWS uses and a lot of customers use, and also Chad Wolf, who is a vice president of security at AWS focused on compliance. And so the three of us spoke about how we're using it internally within AWS to automate certification and compliance. So that crowd was a really interesting mixture of people interested in automated reasoning and people interested in compliance, which are two communities you wouldn't think normally hang together. But it's sort of like chocolate and peanut butter; it turns out to be a really great application. >> And they need to work together, because that's where the action is. Engineering teams are merging with old-school compliance people, so there's a really interesting dynamic. >> Right, and proof has, like, the perfect use case in compliance. So the problem of proving termination of programs is undecidable, and proving problems in propositional logic is NP-complete; all of that sounds very hard and difficult, and you use heuristics to solve those problems. But the thing is that once you've found a proof, replaying the proof is linear in the size of the proof, so you can actually do it extremely efficiently, and that has application in compliance. So you could imagine that you have, for example, PCI, HIPAA, FedRAMP; you have certain controls whose properties you want to prove. For example, within AWS we have a control that all data at rest must be encrypted, and we're using program verification tools to show that of the code base. Now, once we've run such a tool, it constructs a proof, like the one Euclid found for the Pythagorean theorem, that you can package up in a file and hand to an auditor. And then a very simple, easy-to-understand, third-party open-source tool can replay that proof, and so that becomes audit evidence.
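A minimal sketch of why replaying a proof is cheap: this is our toy propositional-resolution encoding, not any AWS audit tool, and the clause format is invented for the example. Each step is re-checked independently, so verification time grows linearly with the size of the certificate.

```python
# Check a resolution refutation: literals are non-zero ints (-x negates x),
# clauses are sets of literals, and the certificate is a list of steps.
def resolve(c1, c2, pivot):
    """Resolve two clauses on `pivot`, removing the complementary pair."""
    assert pivot in c1 and -pivot in c2, "invalid resolution step"
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_refutation(input_clauses, steps):
    """steps: (i, j, pivot) triples resolving earlier clauses i and j.
    The certificate is accepted only if the final derived clause is empty."""
    derived = [frozenset(c) for c in input_clauses]
    for i, j, pivot in steps:
        derived.append(frozenset(resolve(derived[i], derived[j], pivot)))
    return len(derived[-1]) == 0

# "x must hold" and "x must not hold" are contradictory; one step shows it.
print(check_refutation([{1}, {-1}], [(0, 1, 1)]))   # True
```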
>> The engineering problem you're solving is security at scale. The business problem you're solving is that customers are struggling; there just aren't enough security professionals to hire, right? >> Right, and the talk explains it; it's all out there on YouTube, so people watching the show can go check it out. By the way, I should make a plug: if you Google "AWS provable security" there's a web page on AWS that has papers and videos and lots of information, so you might want to check that out. I can't remember what I was answering now, but... >> It's got links to the academic work as well. >> Oh, yes. And that was the point Tom McAndrew was making: in the old days an auditor would come in, look at a couple of machines in a box, check a few things, maybe a little network, and call it good. But now you have machines across the world, extremely complex networks, interactions between policies, networks, crypto, et cetera. And so there's no way a human, or even a team of humans, could come in and have any reasonable chance of actually deeply understanding the system. So they just sort of check some stuff and then they call it success. These tools really allow you to actually understand the entire system. >> Byron, you guys are doing some cutting-edge work. For folks watching who want to know how math translates into the real world, all you high-school kids out there and your parents: this is stuff you learn in school, and you can apply it. Great work. I think this is cutting edge. Math intersects with compliance; the compliance example, the audit example, shows that those worlds are going to come together through math. I think this is a big megatrend. It's not going to eliminate the human element; it's going to augment it. So, great stuff. A final question, just randomly while you're here, since you're a math guru: we're always interested in, and always covering, our favorite topic of blockchain. We believe a security conference is soon going to have a blockchain component, because of the immutability of it; there's a lot of math behind it. So as that starts to mature, and certainly Facebook entering with their own currency is a whole other conversation we don't want to have here, it's bringing a lot of attention. We see the intersection of security becoming a supply-chain problem in the future. Your thoughts on that, just generally? >> So the problem of proving programs is undecidable, and that means that you can't build a general solution.
What you're going to have to do is look for niche areas, like device drivers, networks, policies, APIs, crypto, et cetera, and then make the tools work for that area, and you have to be comfortable with the idea that occasionally the tools aren't going to be able to find an answer. And so the Amazon culture of being customer-obsessed and working as closely as possible with the customer has been really helpful to my community of logic and formal-methods practitioners, because we're really forced to work with the customer and understand the problem. So what I've been doing is listening to the customer and finding out what their problems and concerns are, and focusing my attention on that. And I haven't yet heard of customers asking for mathematical proof on cryptocurrency or blockchain sorts of stuff, but I await it. >> And you're intrigued. >> Yeah, I always like mathematics. But where we have been hearing customers ask for help is, for example, we're working on FreeRTOS, so IoT applications: understanding the networks that are connecting the IoT devices up to the cloud, and understanding the correctness of machine learning. So, I've done some machine learning, I've constructed a model; how do I know what it does, and is it compliant? Does it respect HIPAA, FedRAMP, PCI, et cetera? And some other issues like that. >> There's a lot of talk in the industry about quantum computing creating nightmares for guys like you. How much thought have you given that? Anything you can share with us? >> Yes, there's work in the AWS crypto team preparing for the post-quantum world, so imagine the adversary has a quantum computer. There are proposals, and AWS has a number of proposals, and those proposals have been implemented, so there are standards, and our team has been doing proofs of the correctness of those. Actually, in one of my talks, I think the one not with Chad and Tom, I show a demo of our work to prove the correctness of some post-quantum code. >> So, Byron, thank you for coming on and sharing the insight. Congratulations on the automated reasoning work; it's good to see it put into practice, and we appreciate the commentary. >> Thank you very much. Thank you. >> We're here for the first inaugural cloud security event from AWS, re:Inforce, with theCUBE's coverage. I'm John Furrier with Dave Vellante. Thanks for watching.
Steve Wilkes, Striim | Big Data SV 2018
>> Narrator: Live from San Jose it's theCUBE. Presenting Big Data Silicon Valley. Brought to you by SiliconANGLE Media and its ecosystem partners. (upbeat music) >> Welcome back to San Jose everybody, this is theCUBE, the leader in live tech coverage, and you're watching BigData SV. My name is Dave Vellante. In the early days of Hadoop everything was batch oriented. About four or five years ago the market really started to focus on real-time and streaming analytics to try to help companies affect outcomes while things were still in motion. Steve Wilkes is here, he's the co-founder and CTO of a company called Striim, a firm that's been in this business for around six years. Steve, welcome to theCUBE, good to see you. Thanks for coming on. >> Thanks Dave, it's a pleasure to be here. >> So tell us more about that. You started about six years ago, a little bit before the market really started talking about real time and streaming. So what led you to the conclusion that you should co-found Striim, way ahead of its time? >> It's partly our heritage. So the four of us that founded Striim were executives at GoldenGate Software; in fact our CEO Ali Kutay was the CEO of GoldenGate Software. So when we were acquired by Oracle in 2009, after having to work for Oracle for a couple of years, we were trying to work out what to do next. And GoldenGate was replication software, right? So it's moving data from one place to another. But customers would ask us in customer advisory boards: that data seems valuable, and it's moving. Can you look at it while it's moving, analyze it while it's moving, and get value out of that moving data? And so that was kind of set in our heads. And then when we were thinking about what to do next, that was kind of the genesis of the idea. So the concept around Striim when we first started the company was that we can't just give people streaming data; we need to give them the ability to process that data, analyze it, visualize it, play with it and really truly understand the data, as well as being able to collect it and move it somewhere else. And so the goal from day one was always to build a full end-to-end platform that did everything customers needed to do for streaming integration and analytics out of the box. And that's what we've done after six years. >> I've got to ask a really basic question. So you're talking about your experience at GoldenGate, moving data from point A to point B, and somebody said well, why don't we put that to work. But is that change data, or is it static data? Why couldn't I just analyze it in place? >> GoldenGate works on change data. >> Okay, so that's why: there were changes going through. Why wait until it hits its target? Let's do some work in real time and learn from that, get greater productivity. And now you guys have taken that to a new level. That new level being what? Modern tools, modern technologies? >> A platform built from the ground up to be inherently distributed, scalable, and reliable, with exactly-once processing guarantees, and to be a complete end-to-end platform. There's a recognition that the first part of being able to do streaming data integration or analytics is that you need to be able to collect the data, right? And while change data capture from databases is the way to get data out of databases in a streaming fashion, you also have to deal with files and devices and message queues and anywhere else the data can reside. So you need a large number of different data collectors that all turn the enterprise data sources into streaming data.
And similarly, if you want to store data somewhere, you need a large collection of target adapters that deliver to things not just on premises but also in the cloud; so things like Amazon S3, or cloud databases like Redshift and Google BigQuery. So the idea was really that we wanted to give customers everything they need, and that everything they need isn't trivial. It's not just, well, we take Apache Kafka and then we stuff things into it and then we take things out. Pretty often, for example, you need to be able to enrich data, and that means you need to be able to join streaming data with additional context information, reference data. And that reference data may come from a database or from files or somewhere else. But you can't call out to the database and maintain the speeds of streaming data. We have customers that are doing hundreds of thousands of events per second. So you can't call out to a database for every event and ask for records to enrich it with, and you can't even do that with an external cache because it's just not fast enough. So we built an in-memory data grid into our platform, so you can join streaming data with the context information in real time without slowing anything down. So when you're thinking about doing streaming integration, it's more than just moving data around. It's the ability to process it and get it in the right form, to be able to analyze it, to be able to do things like complex event processing on that data, and also to be able to visualize it and play with it; that's an essential part of the whole platform. >> So I wanted to ask you about end-to-end. I've seen a lot of products from larger, maybe legacy companies that will say it's end-to-end, but what it really is, is cobbled-together pieces that they bought in, and then "this is our end-to-end platform," but it's not unified. Or I've seen others: "Well, we've got an end-to-end platform." Oh really, can I see the visualization? "Well, we don't have visualization; we use this third party for visualization." So convince me that you're end-to-end. >> So with our platform, when you start with it you go into a UI and you can start building data flows. Those data flows start from connectors; we have all the connectors that you need to get your enterprise data, and we have wizards to help you build those. And so now you have a data stream. Now you want to start processing that; we have SQL-based processing, so you can do everything from filtering, transformation, aggregation, and enrichment of data. If you want to load reference data into memory, you use a cache component, drag that in, configure it; you now have data in memory you can join with your streams. If you want to now take the results of all that processing and write it somewhere, you use one of our target connectors, drag that in, so you've got a data flow that's getting bigger and bigger, doing more and more processing. So now you're writing some of that data out to Kafka; oh, I'm also going to add in another target adapter and write some of it into Azure Blob Storage, and some of it's going to Amazon Redshift. So now you have a much bigger data flow. But now you say, okay, I also want to do some analytics on that. So you take the data stream, you build another data flow that is doing some aggregation over windows, maybe some complex event processing, and then you use the dashboard builder to build a dashboard to visualize all of that. And that's all in one product. So it literally is everything you need to get value immediately.
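As a rough illustration of the enrichment step described above, here is a generic Python sketch, not Striim's actual product API: reference data preloaded into an in-memory lookup lets each streaming event be joined with context without a per-event database call. The field names and sample records are invented for the example.

```python
# Generic enrichment sketch: join each streaming event with reference data
# held in memory, avoiding a database round trip per event.
reference = {"cust-1001": {"name": "Acme", "tier": "gold"}}   # preloaded cache

def enrich(event_stream):
    for event in event_stream:
        context = reference.get(event["customer_id"], {})
        yield {**event, **context}        # merge the event with its context

events = [{"customer_id": "cust-1001", "amount": 42.0}]
for enriched in enrich(events):
    print(enriched)
    # {'customer_id': 'cust-1001', 'amount': 42.0, 'name': 'Acme', 'tier': 'gold'}
```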
And you're right, the big vendors have multiple different products, and they're very happy to sell you consulting to put them all together. Even if you're trying to build this from open source, and organizations do try to do that, you need five or six major pieces of open source, a lot of supporting libraries, and a huge team of developers just to build a platform that you can start to build applications on. And most organizations aren't software platform companies; they're finance companies, oil and gas companies, healthcare companies. And they really want to focus on solving business problems and not on reinventing the wheel by building a software platform. So we can just go in there and say, look: value immediately. And that really, really helps. >> So what are some of your favorite use cases, examples, maybe customer examples that you can share with me? >> So one of the great examples: one of my customers has a lot of data in an HP NonStop system, and they needed to be able to get visibility into that immediately. This was order processing, supply chain, and ERP data, and it would have taken a very large amount of time to do analytics directly on the HP NonStop; finding resources to do that is hard as well. So they needed to get the data out, and they needed to get it into the appropriate place, and they recognized that you use the right technology to ask the right question. So they wanted some of it in Hadoop so they could do some machine learning on that, they wanted some of it to go into Kafka so they could get real-time analytics, and they wanted some of it to go into HBase so they could query it immediately and use it for reference purposes. So they utilized us to do change data capture against the HP NonStop and deliver that data stream out immediately into Kafka, and also push some of it into HDFS and some of it into HBase. So they immediately got value out of that, because then they could also build some real-time analytics on it. It would send out alerts if things were taking too long in their order processing system. And it allowed them to get visibility directly into their process that they couldn't get before, with far fewer resources and more modern technologies than they could have used before. So that's one example. >> Can I ask you a question about that? So you talked about Kafka, HBase; you talk about a lot of different open source projects. Have you integrated those, or have you got entries and exits into those? >> So we ship with Kafka as part of our product; it's an optional messaging bus. Our platform has two different ways of moving data around. We have a high-speed, in-memory-only message bus, and that works at almost network speed and is great for a lot of different use cases; that is what backs our data streams. So when you build a data flow, you have streams in between each step, and that is backed by an in-memory bus. Pretty often, though, in use cases you need to be able to potentially rewind data for recovery purposes, or have different applications running at different speeds, and that's where a persistent message bus like Kafka comes in. But you don't want to use a persistent message bus for everything, because it's doing IO and it's slowing things down. So you typically use that at the beginning, at the sources, especially for things like IoT where you can't rewind into them. Things like databases and files you can rewind into and replay and recover, but IoT sources, you can't do that. So you would push those into a Kafka-backed stream, and then subsequent processing is in-memory.
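To illustrate the rewind-and-replay distinction just described, here is a generic sketch, not Striim's or Kafka's implementation: a persisted, offset-addressed log can re-deliver events from a checkpoint after a failure, which a purely in-memory hand-off cannot do. The class and event values are invented for the example.

```python
# Minimal persistent-stream sketch: an append-only log addressed by offset,
# so a consumer that failed can replay from its last checkpoint.
class PersistentStream:
    def __init__(self):
        self._log = []                      # append-only event log

    def append(self, event):
        self._log.append(event)
        return len(self._log) - 1           # offset where the event was stored

    def replay(self, from_offset=0):
        """Re-deliver events starting at a checkpointed offset."""
        yield from self._log[from_offset:]

stream = PersistentStream()
for reading in ("sensor-1: 21C", "sensor-1: 22C", "sensor-1: 35C"):
    stream.append(reading)

# A consumer that crashed after processing offset 0 resumes from offset 1:
print(list(stream.replay(from_offset=1)))   # ['sensor-1: 22C', 'sensor-1: 35C']
```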
So we have that as part of our product. We also have Elastic as part of our product for results storage; you can switch to other results storage, but that's our default. And we have a few other key components that are part of our product, and then on the periphery we have adapters that integrate with a lot of the other things you mentioned. So we have adapters to read and write HDFS, Hive, and HBase across Cloudera, Hortonworks, even MapR; so we have the MapR versions of the file system, MapR Streams, and MapR-DB. And then there are lots of other more proprietary connectors, like CDC from Oracle and SQL Server and MySQL and MariaDB, and then database connectors for delivery to virtually any JDBC-compliant database. >> I took you down a tangent before you had a chance. You were going to give us another example. We're pretty much out of time, but if you can briefly share either that or the last word, I'll give it to you. >> I think the last word would be that that is one example; we have lots and lots of other types of use cases that we handle, including things like migrating data from on-premises to the cloud, being able to distribute log data, being able to analyze that log data, and being able to do in-memory analytics and get real-time insights immediately and send alerts. It's a very comprehensive platform, but each one of those use cases is very easy to develop on its own, and you can do them very quickly. And of course, as the use cases expand within a customer, they build more and more, and so they end up using the same platform for lots of different use cases within the same account. >> And how large is the company? How many people? >> We are around 70 people right now. >> 70 people, and you're looking for funding? What rounds are you in? Where are you at with funding and revenue and all that stuff? >> Well, I'd have to defer to my CEO for those questions. >> All right, so you've been around for what, six years you said? >> Yeah, we have a number of rounds of funding. We had initial seed funding, then we had the investment by Summit Partners that carried us through for a while, and then subsequent investment from Intel Capital, Dell EMC, and Atlantic Bridge. And that's where we are right now. >> Good, excellent. Steve, thanks so much for coming on theCUBE, really appreciate your time. >> Great, it's awesome. Thank you Dave. >> Great to meet you. All right, keep it right there everybody, we'll be back with our next guest. This is theCUBE. We're live from BigData SV in San Jose. We'll be right back. (techno music)