Manish Devgan, Hazelcast | KubeCon + CloudNativeCon Europe 2022
>>theCUBE presents KubeCon + CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillon, senior editor, enterprise architecture, for SiliconANGLE. We're going to talk to some amazing folks on day two of our KubeCon + CloudNativeCon coverage. Paul, we did the wrap-up yesterday, a great back and forth with Enrico about yesterday's sessions. What are you looking forward to today? >>I'm looking to better understand how Kubernetes is being put into production and the types of applications being built on top of it. Yesterday we talked a lot about infrastructure; today I think we're going to talk a bit more about applications, including with our first guest. >>Speaking of our first guest: we have Manish Devgan, chief product officer at Hazelcast. Hazelcast has been on the program before, but this is your first time on theCUBE, correct? >>It is, Keith. Yeah. >>Welcome to theCUBE. So we're talking data, which is always a fascinating topic. Containers have been known for not being supportive of stateful applications; at least by the traditional thinking, you shouldn't hold stateful data in containers. Tell me about the relationship between Hazelcast and containers, here at KubeCon. >>Yeah, so a little bit about Hazelcast: we are a real-time data platform. We are not a database but a data platform, because we support data at rest as well as data in motion. So if you're writing an application, you can query and join incoming events with data that might have been persisted. You can do both stream processing and low-latency data access, and the platform is supported on all the clouds.
And we delegate the orchestration of this scale-out system to Kubernetes, which provides resiliency and many things that go along with it. >>So you say you're not a database platform. What are you used for, to manage the data? >>We are memory-first. We started with low-latency applications, but then we realized that real time has really become a business term; it's more of a business SLA. The punctuated change we see happening in the market today is about real-time data access and real-time applications. Our customers are building real-time offers and real-time threat detection. Just imagine: one of our customers, BNP Paribas, originates a loan while the customer is banking. You're at an ATM, you swipe your card, and you ask to take out 50 euros. At that point they can originate a custom loan offer based on your existing balance, your current request, and your credit score in that moment. That's a value moment for them, and they saw loan originations go up 400% because of it, because nobody is going to be thinking about a line of credit after they're done banking. So it's in that value moment, and our data platform gives you fast access to data and also processes incoming streams, not after they get stored, but as they're coming in. >>So if I'm a developer, and KubeCon is definitely a conference for developers, and I come to the booth, I hear the end value; I hear what I can do with my application. I guess the question is, how do I get there? If it's not a database, how do I make a call from a container, from my microservice, to Hazelcast? Do I think of this as a CNI or a CSI?
How do I access it? >>Yeah. So our server is built in Java, so a lot of the applications written on top of the data platform access it through Java APIs, or if you're a .NET shop you can use the .NET API. We are an API-first platform, and SQL is the polyglot way of accessing data, both streaming data and stored data. Most application developers, and a lot of this is done in microservices, are doing fast gets for data: they have a key, they want to get to a customer, they give a customer ID. And the beauty is that while they're processing the events, they can enrich them, because you need contextual information as well. Going back to the ATM example: the event happened, somebody swiped the card and asked for 50 euros, and now you want more information, like the credit score, and all of that needs to be combined in that value moment. So we allow you to do those joins, and the contextual information is very important. You see a lot of streaming platforms out there that just do streaming, but if you're an application developer, like you asked, you then have to call out to a streaming platform for the streaming analytics and make another call to get the context: what is the credit score for this customer? Whereas in our case, because the data platform supports both streaming and data at rest, you can do that in one call, and you avoid the operational complexity of standing up two different scale-out servers, which is humongous. You want to build your business application. >>So you are querying streaming data and data at rest in the same query? >>Yes, in the same query. And we are memory-first, so we keep a lot of the hot data in memory.
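The single-call enrichment pattern described here, joining an incoming event with stored context at decision time, can be sketched in plain Java. This is a conceptual sketch, not the Hazelcast API; the class, the customer data, and the toy approval rule are all illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of enriching an in-flight event with stored context
// in the same step that makes the decision. NOT the Hazelcast API.
public class EnrichmentSketch {

    // Stored context (the "data at rest"): customer ID -> credit score.
    static final Map<String, Integer> creditScores = new HashMap<>();
    static {
        creditScores.put("cust-42", 780);  // illustrative data
    }

    // An incoming event (the "data in motion"): a card swipe at an ATM.
    record Swipe(String customerId, int amountEuros) {}

    // The event is joined with stored context and decided in one call,
    // instead of a second round trip to a separate context store.
    static String decide(Swipe swipe) {
        int score = creditScores.getOrDefault(swipe.customerId(), 0);
        if (score >= 700 && swipe.amountEuros() <= 100) {
            return "APPROVE_WITH_OFFER";  // value moment: attach a loan offer
        }
        return score > 0 ? "APPROVE" : "DENY";
    }

    public static void main(String[] args) {
        System.out.println(decide(new Swipe("cust-42", 50)));  // APPROVE_WITH_OFFER
        System.out.println(decide(new Swipe("cust-99", 50)));  // DENY: unknown customer
    }
}
```

In the real platform the context would live in the distributed in-memory store rather than a local HashMap, but the shape of the pattern, one lookup and one decision in the same step, is the point being made above.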
So we have a scale-out, RAM-based server; that's where you get the low latency from. In fact, last year we did a benchmark: we were able to process a billion events a second with 99% of latencies under 30 milliseconds. And the most important thing is determinism. If you look at what real time really is, it's predictable latency at scale, because ultimately you're adhering to a business SLA. It's not about milliseconds or microseconds in the abstract; it's what your business needs. If your business needs to approve or deny a credit card transaction in 50 milliseconds, that's your business SLA, and you need that predictability for every transaction. >>So talk to us about how this is packaged and consumed, because I'm hearing about servers and RAM; I'm hearing numbers we're trying to abstract away from at this conference. We don't want to see the underlay; we just want to use it. >>Yeah, so we take away that complexity of managing the scale-out cluster, which utilizes RAM from each server. You can configure it so that the hot set of data is in RAM, while data that is not so hot goes into a tiered storage model. So we are memory-first, but what you are doing is simple: it's an API. You basically do CRUD: you create records, you read them through SQL. For you, it's much like accessing a database. And real time is also a journey. A lot of customers don't want to rip out their existing system and deploy another scale-out platform, so we see a lot of use cases where they have a database, and we can sit in between the database, the system of record, and the application. So we are in between there.
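The arrangement just described, sitting in memory between the application and the system of record, is essentially a read-through cache. A minimal plain-Java sketch follows; it is not the Hazelcast API, and the loader function stands in for a real database.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of sitting "in between" the application and the system of record:
// reads hit the in-memory store first and fall through to the backing
// database only on a miss. NOT the Hazelcast API; names are illustrative.
public class ReadThroughSketch {
    private final Map<String, String> hotData = new HashMap<>();
    private final Function<String, String> systemOfRecord;
    int databaseReads = 0;  // counts how often the slow path was taken

    ReadThroughSketch(Function<String, String> systemOfRecord) {
        this.systemOfRecord = systemOfRecord;
    }

    String get(String key) {
        return hotData.computeIfAbsent(key, k -> {
            databaseReads++;                 // only executed on a miss
            return systemOfRecord.apply(k);  // load from the system of record
        });
    }

    public static void main(String[] args) {
        ReadThroughSketch cache = new ReadThroughSketch(k -> "row-for-" + k);
        cache.get("cust-42");  // miss: loads from the system of record
        cache.get("cust-42");  // hit: served from memory
        System.out.println(cache.databaseReads);  // 1
    }
}
```

This is the journey described above: the application keeps its database, and repeated reads are served from RAM instead of the system of record.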
So that's the journey you can take to real time. >>How do containers and Kubernetes change the game for real-time analytics? >>Kubernetes does change it. First of all, we serve mostly operational workloads; we have most of the big banks and credit card companies, and financial services and retail are the two big sectors for us. A lot of these operational workloads are moving to the cloud, and with that move customers are taking their existing applications to one of the providers. Having Kubernetes orchestrate this scale-out platform, with auto-scaling, is where the benefit comes from. It also gives them freedom of choice: Kubernetes is a standard that goes across cloud providers, so they can take their application and, if they want, move it to a different provider, because we take away the orchestration complexity in that abstraction layer. >>So what happens when I need to go really fast? I'm looking at bare metal, and I'm looking at really scaling a homogeneous application in a single data center or set of data centers. Is there a bare metal play here? >>Yes. If you want microsecond latency, we have customers who store two to four terabytes in RAM and stand that up. Again, it depends on what kind of deployment you want.
You can either scale up or scale out. Scaling up is expensive, because those boxes are not cheap, but if you have a sub-millisecond or microsecond latency requirement, you could store the entire data set in RAM. A lot of operational data sets are under four terabytes, so it's not uncommon to take the entire operational, transactional data set and move it to pure RAM. But we also see that these operational workloads increasingly need analytics on top as well. >>Going back to the example I gave you: that customer is not only doing stream processing, they're also inferencing a machine learning algorithm in that same life cycle. They might have trained the machine learning algorithm on a data lake somewhere, but once it's ready, they're inferencing the ML algorithm right there in our life cycle. That really brings analytics and transactions together, because after all, transactions are where the real insights are. >>I'm struggling a little bit with these two different use cases, where I have a transactional database or transactional data platform alongside an analytics platform. Those are two different things: I have spinning rust for one, and memory and NVMe for the other, and that requires tuning, requires DBAs, requires a lot of overhead. There seems to be some type of secret sauce going on here. >>Yeah. We basically say that if you have a business case where you want to make a decision, the only chance to succeed is when you are not making a decision tomorrow based on today's data. The only way to act on that data is today.
So "act" is the keyword here. We let you generate a real-time offer; we let you do credit card fraud detection in that moment. Analytics is often about knowing; this is about acting. Most of our applications are mission-critical and act in real time. When you talk about data lakes, there is a real-time element there as well, but it's about knowing, and we believe the operational side is where the value moment is. What good is it to know about something tomorrow, after something wrong has already happened? So there's a latency squeeze there too, but we are more on the transactional and operational side. >>I gotcha. So help me understand the integrations. When I think of transactions, I'm thinking of SAP or Oracle, where the processing is done, or some legacy, or rather modern, banking app. How does the data get from one platform to Hazelcast so I can make those decisions? >>Yeah. Our streaming engine has a whole set of connectors to a lot of data sources. In fact, most of our use cases already have data sources underneath: there are databases, and there are Kafka connectors joining us, because if you look at it, events are comprised of transactions, something a customer did, like a credit card swipe, and events can also come from machines or IoT. So it's really about connectivity and data ingestion before you can process anything, and we have a whole suite of connectors to bring data into our platform. >>We've been talking a lot these last couple of days about the edge and about moving processing capability closer to the edge. How do you enable that? >>Yeah.
So edge is actually very relevant, because of what's happening there. If you look at an edge deployment use case, we have one where data is pushed from different edge devices to a cloud data warehouse. But imagine that you want to filter the data where it originates, and push only the relevant data to a central data lake, where you might train your machine learning models. At the edge, we are able to process that data: Hazelcast allows you to write a data pipeline and do stream processing, so that you push only the subset of data that matches your rules. There's a lot of data being generated, and you don't want garbage in and garbage out, so filtration is done at the edge, and only the relevant data lands in the data lake. >>Well, Manish, we really appreciate you stopping by. Real-time data is an exciting area of coverage for theCUBE. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon, and you're watching theCUBE, the leader in high-tech coverage.
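The edge idea outlined above, filtering where the data originates and forwarding only the relevant subset, can be sketched with plain Java streams. This is not Hazelcast's pipeline API; the Reading type and the temperature rule are made up for illustration.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of edge-side filtering: process readings where they originate and
// forward only the relevant subset upstream. NOT the Hazelcast pipeline API.
public class EdgeFilterSketch {
    record Reading(String deviceId, double temperature) {}

    // Keep only readings that match the rule (here: unusually hot devices),
    // so the central data lake never sees the irrelevant bulk.
    static List<Reading> relevantSubset(List<Reading> atEdge, double threshold) {
        return atEdge.stream()
                .filter(r -> r.temperature() > threshold)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Reading> atEdge = List.of(
                new Reading("edge-1", 21.5),
                new Reading("edge-2", 98.0),
                new Reading("edge-3", 22.0));
        // Only one reading crosses the rule and would be pushed upstream.
        System.out.println(relevantSubset(atEdge, 90.0).size());  // 1
    }
}
```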
Kelly Herrell, Hazelcast | RSAC USA 2020
>>Live from San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco. Brought to you by SiliconANGLE Media. >>Hey, welcome back, everyone. This is theCUBE's coverage here in San Francisco at Moscone South, at the RSA Conference. I'm John, your host on theCUBE. Cybersecurity is now a global phenomenon, but companies have to move at the speed of business, which now means the speed of the potential attacks. This is a new paradigm shift, a new generation of problems to be solved, and companies solving them. We have a hot startup here that's growing: Hazelcast. The CEO, Kelly Herrell, is here, a Cube alumni. Good to see you. >>Good to see you, John. >>So we know each other; you've been on before. Networking, compute, you know the industry. You're now the CEO of Hazelcast. First of all, what does Hazelcast do? Then we can get into some of the cool things. >>Hazelcast is an in-memory compute platform. We're a neutral platform: you write your applications to us, we sit in front of things like databases and streaming sources, and we execute applications at microsecond speeds, which is really important as we move more and more toward digital and AI. Basically, when time matters, when time is money, people buy Hazelcast. >>So I've got to ask about your interest. You can do a lot of different things; you can run any company you want. Why Hazelcast? What attracted you to this company, what was unique about it, and what made you join the firm? >>Well, when I first started looking at it, I realized that a hundred of the world's largest companies are their customers, and yet this company was kind of a run-silent, run-deep company; a lot of people didn't know about it. I had this dissonance: how can this possibly be the case?
Well, it turns out that if you go into the Java developer world, the name is like Kleenex. Everybody knows Hazelcast because of the open source adoption, which went viral a long time ago. Once I started realizing what they had and why people were buying it, I looked at that problem statement, and the problem is really growing with digitalization. The more things speed up, the more applications have to perform at really low latency. So there was this big growth-market opportunity, and Hazelcast clearly had the drop on the market. >>So I've got to ask you, we're at RSA, and I mentioned in my intro the speed of business. It's become a cliche, moving at the speed of business, but now business has to move at the speed of its reaction to large-scale events, whether that's compute power, cloud computing, or obviously cyber attacks and the response to them. How do you view that, and how are you attacking that problem? >>Well, it's funny. I think the first time I truly understood security was the day I was shopping for a home safe, because I realized that all of these safes compete on one common metric: the mean time to break in. You had one job, and all you can tell me is that it's going to happen eventually. So the scales fell from my eyes, and I realized that when it comes to security, the only common factor is elapsed time. The second thing is that time is relative; it's relative to the speed of the attack. If I'm just trying to protect my goods in a safe, the elapsed time that matters is how long it takes the bad guy to break into the safe. >>But now we're working at digital speeds. Take one second and break it down by a thousand: those are milliseconds.
It takes 300 milliseconds to blink. Now we're working at microsecond speeds, and we're finding a rapidly growing number of transactions that have to perform at that scale and speed. It may have escaped people completely, but did it ever dawn on you that card processing, credit and debit card processing, is an IoT application now? My phone is a terminal; Amazon is a giant terminal. The number of transactions keeps going up, and processors have three milliseconds to decide whether or not to approve one. Using Hazelcast, they not only handle it within those three milliseconds, they also run multiple fraud detection algorithms in that same window. >>Okay, so I get it now; that's why in-memory becomes critical. You've got to be in memory. So the next logical question, and I want to dig into this in a second, but let's go to the application developer: I'm doing DevOps, I'm doing cloud, I'm cool, right? And now you wake me up and say, wait a minute, I'm dealing with nanosecond latency. What do I do? How do applications respond to that kind of attack velocity? >>Well, it's not much of an evolution. Writing an application to Hazelcast is very simple to do. There are something like 60 million Hazelcast cluster starts every month, so people out in the wild are doing this all day long, and we're really big in the Java developer community, but not only Java. >>It's very straightforward to write your application and point it at Hazelcast instead of pointing at the database behind us, so that part is actually very simple. >>All right, so take me through it. I get the market space you're going after; it makes total sense, and I think you're riding the right wave.
Business model, product, how you're organized, how people sign up on the development side: who's your buyer? What does the business look like? Give us Hazelcast 101. >>Yeah. We're an open-core model, meaning the core engine is open source and fully downloadable, free to use. The additional functionality is the commercial aspect, which tends to be the features used in sensitive and large-scale deployments. Developers just come to hazelcast.org and join the community that way. >>The people we engage with are everyone from the developer up through the architect, and then the C-level member charged with standing up whatever the new capability is. So we talk up and down that chain. We're a very technical company, but we've got a very powerful RLM. >>What does the developer makeup look like? Is it a software developer, an engineer? >>They're core application developers, a lot in Java, increasingly in .NET, and as ML and AI come on, we're getting a lot of Python. So it's developers with that skill set, and they're writing an application their division has specified: we need this new application, and it could be for customer engagement, for fraud detection, for stock trading, anything that's super time-sensitive. They select us and they build on us. >>So you get the in-memory solution for developers. Take me through the monetization on the open core. Is it services? >>It's a subscription model. We are paid on an annual basis for use of the software.
However large the installation gets is basically what determines the price, and then it's just renewed annually. >>Awesome. Good subscriptions, good economics. >>It is. >>What about the secret sauce? What's under the covers? Can you share what the magic is, or is it proprietary? >>It's hardcore computer science; it really is. And that's what is in the core engine.
Um, massive banks, uh, card processors, uh, we don't get to talk about very many of them, but you know, something like national bank of Australia, uh, capital one, um, you know, you can, you can let your, your mind run there. Um, our largest customer has over a trillion dollar market cap. There's only a few that meet that criteria. So I'll let you on that one. One of the three. Um, all right, so what's next for you guys? >>Give the quick plug in. The company would appreciate the insights. I think he'd memory's hot. What do you guys are going to do? What's your growth strategy? Uh, what's, what do you, what's your priorities? The CEO? Yeah. Well, we just raised a $50 million round, which is a very, very significant round. Um, and we're putting that to work aggressively. We just came off the biggest quarter in the company's history. So we're really on fire right now. Uh, we've established a very strong technology partnership with Intel, uh, including specialty because of their AI initiatives. Because we power a lot of AI, uh, uh, applications. IBM has become a strategic partner. They're now reselling Hazelcast. Uh, so we've got a bunch of, uh, a bunch of wind in our sails right now coming into this year, what we're going to be doing is, uh, really delivering a full blown, uh, in memory compute platform that delivers, that can process stored and streaming data simultaneously. >>Nothing else on the planet can do that. We're finding some really innovative applications and, um, you know, we're just really, really working on market penetration right now. You know, when you see all these supply chain hacks out there, you're going to look at more in memory detection, prevention, counter strike, you know, all this provision things you got to take care of. Mean applications have to now respond. It's almost like a whole new SLA for application requirement. Yeah, it is. 
I mean, the bad guys are moving to digital speed, you know, if you have important apps that, uh, that are affected by that. Right. You know, you'd better get ahead of that. Well, actually you could be doing that, by the way. You can be doing that on your, on premise or you could be doing in the cloud with the managed service that we've also stood up while still we get the Cuban in, in memory Africa and when we were there, I will be happy. Kelly, congratulations on the funding. Looking forward to tracking you. We'll follow up and check in with you guys. All right. Congratulations. Awesome. Thanks John. I appreciate it. Okay. It's keep coverage here in San Francisco, the Moscone. I'm John furrier. Thanks for watching.