Breaking Analysis: Why Apple Could be the Key to Intel's Future
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The latest Arm Neoverse announcement further cements our opinion that its architecture, business model and ecosystem execution are defining a new era of computing and leaving Intel in its dust. We believe the company and its partners have at least a two-year lead on Intel and are currently in a far better position to capitalize on the major waves that are driving the technology industry and its innovation. To compete, our view is that Intel needs a new strategy. Now, Pat Gelsinger is bringing that, but they also need financial support from the U.S. and EU governments. Pat Gelsinger was just noted as requesting from the EU government $9 billion, sorry, 8 billion euros, in financial support. And very importantly, Intel needs volume for its new foundry business. And that is where Apple could be key. Hello, everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll explain why Apple could be the key to saving Intel and America's semiconductor industry leadership. We'll also further explore our scenario of the evolution of computing and what will happen to Intel if it can't catch up. Here's a hint: it's not pretty. Let's start by looking at some of the key assumptions that we've made that are informing our scenarios. We've pointed out many times that we believe Arm wafer volumes are approaching 10 times those of x86 wafers. This means that manufacturers of Arm chips have a significant cost advantage over Intel. We've covered that extensively, but we repeat it because when we see news reports and analysis in print, it's not a factor that anybody's highlighting. And this is probably the most important issue that Intel faces, and it's why we feel that Apple could be Intel's savior. We'll come back to that. We've projected that the chip shortage will last no less than three years, perhaps even longer. As we reported in a recent Breaking Analysis, while Moore's Law is waning, the result of Moore's Law, i.e. the doubling of processor performance every 18 to 24 months, is actually accelerating. We've observed and continue to project a quadrupling of performance every two years, breaking historical norms. Arm is attacking the enterprise and the data center. We see hyperscalers as the tip of their entry spear; AWS's Graviton chip is the best example. Amazon and other cloud vendors that have engineering and software capabilities are making Arm-based chips capable of running general-purpose applications. This is a huge threat to x86, and if Intel doesn't act quickly, we believe Arm will gain a 50% share of enterprise semiconductor spend by 2030. We see the definition of cloud expanding. Cloud is no longer a remote set of services in the cloud; rather, it's expanding to the edge, where the edge could be a data center, a data closet, or a true edge device or system. And Arm is, by far in our view, in the best position to support the new workloads and computing models that are emerging as a result. Finally, geopolitical forces are at play here. We believe the U.S. government will do, or at least should do, everything possible to ensure that Intel and the U.S. chip industry regain their leadership position in the semiconductor business. If they don't, the U.S. and Intel could fade to irrelevance. Let's look at this last point and make some comments on that. 
Here's a map of the South China Sea, and way off in the Pacific we've superimposed a little pie chart. And we asked ourselves, if you had a hundred points of strategic value to allocate, how much would you put in the semiconductor manufacturing bucket and how much would go to design? And our conclusion was 50/50. Now, it used to be, because of Intel's dominance with x86 and its volume, that the United States was number one in both strategic areas. But today that orange slice of the pie is dominated by TSMC, thanks to Arm volumes. Now, we've reported extensively on this and we don't want to dwell on it for too long, but on all accounts, cost, technology, volume, TSMC is the clear leader here. China's President Xi has a stated goal of unifying Taiwan with China by China's centennial in 2049. Will this tiny island nation, which dominates a critical part of the strategic semiconductor pie, go the way of Hong Kong and be subsumed into China? Well, military experts say it would be very hard for China to take Taiwan by force without heavy losses and some serious international repercussions. The U.S. military presence in the Philippines, Okinawa and Guam, combined with support from Japan and South Korea, would make it even more difficult. And certainly the Taiwanese people, you would think, would prefer their independence. But Taiwanese leadership ebbs and flows between those hardliners who really want to separate and want independence and those that are more sympathetic to China. Could China, for example, use cyber warfare to, over time, control the narrative in Taiwan? Remember, if you control the narrative, you can control the meme. If you control the meme, you control the idea. If you control the idea, you control the belief system. And if you control the belief system, you control the population without firing a shot. So is it possible that over the next 25 years China could weaponize propaganda and social media to reach its objectives with Taiwan? Maybe it's a long shot, but if you're a senior strategist in the U.S. government, would you want to leave that to chance? We don't think so. Let's park that for now and double-click on one of our key findings, and that is the pace of semiconductor performance gains. As we first reported a few weeks ago, while Moore's Law is moderating, the outlook for cheap, dense and efficient processing power has never been better. This slide shows two simple log lines. One is the traditional Moore's Law curve, that's the one at the bottom, and the other is the current pace of system performance improvement that we're seeing, measured in trillions of operations per second. Now, if you calculate the historical annual rate of processor performance improvement that we saw with x86, the math comes out to around 40% improvement per year. Now that rate is slowing. It's now down to around 30% annually. So we're not quite doubling every 24 months anymore with x86, and that's why people say Moore's Law is dead. But if you look at the combined effects of packaging CPUs, GPUs, NPUs, accelerators, DSPs and all the alternative processing power you can find in SoC, system on chip, and eventually system on package, it's growing at more than a hundred percent per annum. And this means that the processing power is now quadrupling every 24 months. That's impressive. And the reason we're here is Arm. Arm has redefined the core processor model for a new era of computing. 
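To make those rates concrete, here is a quick back-of-the-envelope sketch of the arithmetic; the percentages are the ones cited above, not new data.

```python
# Back-of-the-envelope check on the growth rates cited above:
# ~40%/year historical x86, ~30%/year today, and 100%+/year for the
# combined SoC / system-on-package approach.

def growth_over(years: int, annual_rate: float) -> float:
    """Total performance multiple after compounding for `years`."""
    return (1.0 + annual_rate) ** years

for label, rate in [("historical x86", 0.40),
                    ("current x86", 0.30),
                    ("SoC/SoP aggregate", 1.00)]:
    print(f"{label:17s}: {growth_over(2, rate):.1f}x every two years")

# historical x86   : 2.0x every two years  (the classic doubling)
# current x86      : 1.7x every two years  (no longer doubling)
# SoC/SoP aggregate: 4.0x every two years  (the quadrupling referenced above)
```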
Arm made an announcement last week which really recycled some old content from last September, but it also put forth new proof points on adoption and performance. Arm laid out three components in its announcement. The first was Neoverse V1, which is all about extending vector performance. This is critical for high-performance computing, HPC, which at one point was thought to be a niche, but it is the AI platform, and AI workloads are not a niche. Second, Arm announced the Neoverse N2 platform, based on the recently introduced Armv9. We talked about that a lot in one of our earlier Breaking Analysis episodes. This is going to deliver a performance boost of around 40%. Now, the third was called CMN-700, Arm maybe needs to work on some of its names, but Arm said this is the industry's most advanced mesh interconnect. This is the glue for the V1 and the N2 platforms. The importance is it allows for more efficient use and sharing of memory resources across components of the system package. We talked about the importance of that capability extensively in previous episodes. Now, this wheel diagram we want to share with you underscores the completeness of the Arm platform. Arm's approach is to enable flexibility across an open ecosystem, allowing for value add at many levels. Arm has built the architecture and design, and allows an open ecosystem to provide the value-added software. Now, very importantly, Arm has created the standards and specifications by which it can, with certainty, certify that the foundry can make the chips to a high quality standard and, importantly, that all the applications are going to run properly. In other words, if you design an application, it will work across the ecosystem and maintain backwards compatibility with previous generations, like Intel has done for years. But Arm, as we'll see next, is positioning not only for existing workloads but also the emerging high-growth applications. Here's the Arm total available market as we see it. We think the end-market spending value of just the chips going into these areas is $600 billion today, and it's going to grow to $1 trillion by 2030. In other words, we're allocating the value of the end-market spend in these sectors to the marked-up value of the silicon as a percentage of the total spend. It's enormous. So the big areas are hyperscale clouds, which we think are around 20% of this TAM, and the HPC and AI workloads, which account for about 35%, and the edge will ultimately be the largest of all, probably capturing 45%. And these are rough estimates, and they'll ebb and flow, and there's obviously some overlap, but the bottom line is the market is huge and growing very rapidly. And you see that little red highlighted area, that's enterprise IT, traditional IT, and that's the x86 market in context. So it's relatively small. What's happening is we're seeing a number of traditional IT vendors packaging x86 boxes, throwing them over the fence and saying, we're going after the edge. And what they're doing is saying, okay, the edge is this aggregation point for all these endpoint devices. We think the real opportunity at the edge is for AI inferencing. That is where most of the activity and most of the spending is going to be, and we think Arm is going to dominate that market. And this brings up another challenge for Intel. So we've made the point a zillion times that PC volumes peaked in 2011, and we saw that as problematic for Intel for the cost reasons that we've beat into your head. 
And lo and behold, PC volumes actually grew last year thanks to COVID, and will continue to grow, it seems, for a year or so. Here's some ETR data that underscores that fact. This chart shows the Net Score, remember that's spending momentum, breakdown for Dell's laptop business. The green means spending is accelerating and the red is decelerating, and the blue line is Net Score, that spending momentum. And the trend is up and to the right. Now, as we've said, this is great news for Dell and HP and Lenovo and Apple for its laptops, all the laptop sellers, but it's not necessarily great news for Intel. Why? I mean, it's okay, but what it does is it shifts Intel's product mix toward lower-margin PC chips and it squeezes Intel's gross margins. So the CFO has to explain that margin contraction to Wall Street. Imagine that, the business that got Intel to its monopoly status is growing faster than the high-margin server business, and that's pulling margins down. So as we said, Intel is fighting a war on multiple fronts. It's battling AMD in the core x86 business, both PCs and servers. It's watching Arm mop up in mobile. It's trying to figure out how to reinvent itself and change its culture to allow more flexibility into its designs. And it's spinning up a foundry business to compete with TSMC. So it's got to fund all this while at the same time propping up its stock with buybacks. Intel last summer announced that it was accelerating its $10 billion stock buyback program, $10 billion. Buy stock back or build a foundry, which do you think is more important for the future of Intel and the U.S. semiconductor industry? So Intel has got to protect its past while building its future and placating Wall Street, all at the same time. And here's where it gets even more dicey. Intel's got to protect its high-end x86 business. It is the cash cow and funds their operation. Who's Intel's biggest customer, Dell, HP, Facebook, Google, Amazon? Well, let's just say Amazon is a big customer, can we agree on that? And we know AWS's biggest revenue generator is EC2, and EC2 is powered by microprocessors made by Intel and others. We found this slide in the Arm Neoverse deck and it caught our attention. The data comes from a data platform called Liftr Insights. The charts show the rapid growth of AWS's Graviton chips, which are their custom-designed chips based on Arm, of course. The blue is Graviton, the black, vendor A, presumably is Intel, and the gray is assumed to be AMD. The eye-popper is the 2020 pie chart. Of the instance deployments, nearly 50% are Graviton. So if you're Pat Gelsinger, you better be all over AWS. You don't want to lose this customer and you're going to do everything in your power to keep them, but the trend is not your friend in this account. Now the story gets even gnarlier, and here's the killer chart. It shows the ISV ecosystem platforms that run on Graviton2. Because AWS has such good engineering and controls its own stack, it can build Arm-based chips that run software designed to run on general-purpose x86 systems. Yes, it's true, the ISVs have got to do some work, but large ISVs have huge incentives, because they want to ride the AWS wave. Certainly the user doesn't know or care, but AWS cares, because it's driving costs and energy consumption down and performance up. Lower cost, higher performance. Sounds like something Amazon wants to consistently deliver, right? And the ISV portfolio that runs on Arm-based Graviton is just going to continue to grow. 
And by the way, it's not just Amazon. It's Alibaba, it's Oracle, it's Marvell, it's Tencent. The list keeps growing. Arm trotted out a number of names, and I would expect over time it's going to be Facebook and Google and Microsoft, if they're not already there. Now, the last piece of the Arm architecture story that we want to share is the progress that they're making, and compare that to x86. This chart shows how Arm is innovating, and let's start with the first line under platform capabilities, the number of cores supported per die or system. Now, a die is what ends up as a chip on a small piece of silicon. Think of the die as the circuit diagram of the chip, if you will, and these circuits are fabricated on wafers using photolithography. The wafers are then cut up into many pieces, each piece being a chip, and two chips make up a system. The key here is that Arm is quadrupling the number of cores instead of increasing thread counts. It's giving you cores. Cores are better than threads, because threads are shared and cores are independent and much easier to virtualize. This is particularly important in situations where you want to be as efficient as possible sharing massive resources, like the cloud. Now, as you can see in the right-hand side of the chart, under the orange, Arm is dramatically increasing the amount of capabilities compared to previous generations. And one of the other highlights to us is that last line, the CCIX and CXL support, again, Arm maybe needs to name these better. These refer to Arm's memory-sharing capabilities within and between processors. This allows CPUs, GPUs, NPUs, et cetera, to share resources very efficiently, especially compared to the way x86 works, where everything is currently controlled by the x86 processor. CCIX and CXL support, on the other hand, will allow designers to program the system and share memory wherever they want within the system directly, and not have to go through the overhead of a central processor which owns the memory. So for example, if there's a CPU, GPU and NPU, the CPU can say to the GPU, give me your results at a specified location and signal me when you're done. So when the GPU is finished calculating and sending the results, the GPU just signals that the operation is complete, versus having to ping the CPU constantly, which is overhead intensive. Now, composability in that chart means the system isn't fixed. Rather, you can programmatically change the characteristics of the system on the fly. For example, if the NPU is idle, you can allocate more resources to other parts of the system. Now, Intel is doing this too in the future, but we think Arm is way ahead, at least by two years. This is also huge for Nvidia, which today relies on x86. A major problem for Nvidia has been coherent memory management, because the utilization of its GPUs is appallingly low and it can't be easily optimized. Last week, Nvidia announced its intent to provide an AI capability for the data center without x86, i.e. using Arm-based processors. So Nvidia, another big Intel customer, is also moving to Arm. And if it's successful in acquiring Arm, which is still a long shot, this trend is only going to accelerate. But the bottom line is, if Intel can't move fast enough to stem the momentum of Arm, we believe Arm will capture 50% of enterprise semiconductor spending by 2030. So how does Intel continue to lead? Well, it's not going to be easy. Remember we said Intel can't go it alone. 
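As a brief aside before moving on, the "signal me when you're done" idea above is simply a completion-notification pattern. The following is a loose Python analogy of signaling versus constant polling, not actual CCIX or CXL code.

```python
# Rough analogy only, ordinary Python threading rather than CCIX/CXL code:
# the "worker" writes its result to an agreed location and raises a single
# completion signal, instead of the other side polling it repeatedly.
import threading
import time

shared_result = {}            # stands in for the agreed memory location
done = threading.Event()      # stands in for the completion signal

def accelerator_like_worker():
    time.sleep(0.1)                   # pretend to compute something
    shared_result["value"] = 42       # place the result where it was requested
    done.set()                        # signal once that the work is complete

threading.Thread(target=accelerator_like_worker).start()

done.wait()                   # block until signaled; no constant pinging
print("result ready:", shared_result["value"])
```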
And we posited that the company would have to initiate a joint venture structure. We proposed a triumvirate of Intel, IBM with its Power 10 and memory aggregation architecture, and Samsung with its volume manufacturing expertise, on the premise that it coveted an on-U.S.-soil presence. Now, upon further review, we're not sure that Samsung is willing to give up and contribute its IP to this venture. It's put a lot of money and a lot of emphasis on infrastructure in South Korea. And furthermore, we're not convinced that Arvind Krishna, who we believe ultimately made the call to jettison IBM's microelectronics business, wants to put his efforts back into manufacturing semiconductors. So we have this conundrum. Intel is fighting AMD, which is already at seven nanometer. Intel has fallen behind in process manufacturing, which is strategically important to the United States, its military and the nation's competitiveness. Intel's behind the curve on cost and architecture and is losing key customers in the most important market segments. And it's way behind on volume, the critical piece of the pie that nobody ever talks about. Intel must become more price and performance competitive with x86, bring in new composable designs that maintain x86 competitiveness, and give customers and designers the ability to add and customize GPUs, NPUs, accelerators, et cetera, all while launching a successful foundry business. So we think there's another possibility to this thought exercise. Apple is currently reliant on TSMC and is pushing them hard toward five nanometer, in fact sucking up a lot of that volume, and TSMC is maybe not servicing some other customers as well as it's servicing Apple, because it's a bit distracted and you have this chip shortage. So Apple, because of its size, gets the lion's share of the attention, but Apple needs a trusted onshore supplier. Sure, TSMC is adding manufacturing capacity in the U.S. in Arizona, but back to our precarious scenario in the South China Sea. Will the U.S. government and Apple sit back and hope for the best, or will they hope for the best and plan for the worst? Let's face it, if China gains control of TSMC, it could block access to the latest and greatest process technology. Apple just announced that it's investing billions of dollars in semiconductor technology across the U.S. The U.S. government is pressuring big tech. What about an Apple-Intel joint venture? Apple brings the volume, its cloud, its cloud, sorry, its money, its design leadership, all that to the table, and they could partner with Intel. It gives Intel the foundry business and a guaranteed volume stream. And maybe the U.S. government gives Apple a little bit of breathing room in the whole breakup-big-tech narrative, even though it's not necessarily specifically targeting Apple. But maybe the U.S. government needs to think twice before it attacks big tech, and think about the long-term strategic ramifications. Wouldn't that be ironic? Apple dumps Intel in favor of Arm for the M1, and then incubates, and essentially saves, Intel with a pipeline of foundry business. Now, back to IBM. In this scenario we've put a question mark on the slide, because maybe IBM just gets in the way, and why not a nice clean partnership between Intel and Apple? Who knows? Maybe Gelsinger can even negotiate this without giving up any equity to Apple, but Apple could be a key ingredient to a cocktail of a new strategy under Pat Gelsinger's leadership. 
Gobs of cash from the US and EU governments and volume from Apple. Wow, still a long shot, but one worth pursuing because as we've written, Intel is too strategic to fail. Okay, well, what do you think? You can DM me @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn post. Remember, these episodes are all available as podcasts so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com. And don't forget to check out etr.plus for all the survey analysis. And I want to thank my colleague, David Floyer for his collaboration on this and other related episodes. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching, be well, and we'll see you next time. (upbeat music)
SUMMARY :
In this Breaking Analysis, Dave Vellante argues that Arm's architecture, business model and ecosystem execution have opened at least a two-year lead over Intel, and that Intel's path back requires a new strategy under Pat Gelsinger, financial support from the U.S. and EU governments, and foundry volume that a partnership with Apple could provide.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
TSMC | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
2011 | DATE | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Pat Gelsinger | PERSON | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
50% | QUANTITY | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
$600 | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
45% | QUANTITY | 0.99+ |
two chips | QUANTITY | 0.99+ |
10 times | QUANTITY | 0.99+ |
Tencent | ORGANIZATION | 0.99+ |
South Korea | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
Last week | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Arizona | LOCATION | 0.99+ |
U S | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
1 trillion | QUANTITY | 0.99+ |
2030 | DATE | 0.99+ |
Marvell | ORGANIZATION | 0.99+ |
China | ORGANIZATION | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Moore | PERSON | 0.99+ |
$9 billion | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
EU | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
last week | DATE | 0.99+ |
twice | QUANTITY | 0.99+ |
first line | QUANTITY | 0.99+ |
Okinawa | LOCATION | 0.99+ |
last September | DATE | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Daniel Berg, IBM Cloud & Norman Hsieh, LogDNA | KubeCon 2018
>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone, it's theCUBE live here in Seattle for day three of three of wall-to-wall coverage. We've been analyzing here on theCUBE for three days, talking to all the experts, the CEOs, CTOs, developers, startups. I'm John Furrier, Stu Miniman, with theCUBE coverage here at, not DockerCon, KubeCon and CloudNativeCon. Getting down to the last Con. >> So close, John, so close. >> Lot of Docker containers around here. We'll check in on the Kubernetes. Our next two guests got a startup, hot startup here. You got Norman Hsieh, head of business development, LogDNA, a new compelling solution on Kubernetes that gives them a unique advantage, and of course, Daniel Berg, who's a distinguished engineer at IBM. They have a deal. We're going to talk about the startup and the deal with IBM. The highlights, kind of a new model, a new world's developing. Thanks for joining us. >> Yeah, no problem, thanks for having us. >> Maybe get you on at DockerCon sometime. (Daniel laughing) Get you to DockerCon. The containers have certainly been great. Talk about your product first. Let's get your company out there. What do you guys do? You got something new and different, something needed. What's different about it? >> Yeah, so we started building this product. One thing we were trying to do is find a logging solution that was built for developers, especially around DevOps. We were running our own multi-tenant SaaS product at the time and we just couldn't find anything great. We tried open source Elastic and it turned out to be a lot to manage, there was a lot of configuration we had to do. We tried a bunch of the other products out there, which were mostly built for log analysis, so you'd analyze logs maybe a week or two after, and there was nothing just real time that we wanted, and so we decided to build our own. We overcame a lot of challenges where we just felt that we could build something that was easier to use than what was out there today. Our philosophy, for developers, is we want to make it as simple as possible. We don't want you to have to manage, or even think about, how logs work today. And so, the whole idea, even if you go down to some of the integrations that we have, our Kubernetes integration is two lines. You essentially run two kubectl lines and your entire cluster will get logged, directly, in seconds. That's something we show oftentimes at demos as well. >> Norman, I wonder if you can drill in a little bit more for us. What we always look at is, a lot of times the new generation, they've got new tools to play with and new things to do. What was different, what changes? Just the composability and the small form factor. I would think that you could just change the order of magnitude in some of the pricing of some of these. Tell us why it's different. >> Yeah, I mean, I think there were three major things. One was speed. So what we found was that there weren't a lot of solutions that were optimized really, really well for finding logs. There were a lot of log solutions out there, but we wanted to optimize that, so we fine-tuned Elasticsearch. We do a lot of stuff around there to make that experience really pleasurable for our users. The other is scale.

So what we're noticing now is, if you kind of expand on the world, back in the day we had single machines that people got logs off of, then you went to VMware where you're taking a single machine and splitting it up into multiple different things, and now you have containers, and all of a sudden you have Kubernetes, and you're talking about thousands and thousands of nodes running in a large production service. How do you find logs in those things? And so we really wanted to build for that scale and that usability where, for Kubernetes, we'll automatically tag all your logs coming through. So you might get a single log line, but we'll tag it with all the metadata you need to find exactly what you want. So if my container dies and I no longer know that container's around, how am I going to get the logs off of that? Well, you can go to LogDNA, find the container that you're looking for, and know exactly where that error's coming from as well. >> So you're basically storing all this data, making it really easy for the integration piece. Where does the IBM relationship fit in? What's the partnership? What are you guys doing together? >> I don't know if Dan wants to-- >> Go ahead, go ahead. >> Yeah, so we're partnering with IBM. We are one of their major partners for logging. So if you go into the Observability tab under IBM Cloud and click on Logging, logging is there, you can start the logging instance. What we've done is, IBM's brought us a great opportunity where we could take our product and help benefit their own customers, and also IBM themselves, with a lot of the logging that we do. They saw that we have a very simplistic way of thinking about logs, and when you think about IBM Cloud and the shift that they're moving towards, which is really developer-focused, it was a really, really good match for us. It brought us the visibility into the upmarket with larger customers and also gives us the ability to deploy globally across IBM Cloud as well. >> I mean, IBM's got a great channel on the sales side too, and you guys got a great relationship. We've seen that playbook before, where I think we've interviewed in all the other events with IBM. Startups can really, if they fit in with IBM, it's just massive, but what's the reason? Why the partnership? Explain. >> Well, I mean, first of all we were looking for a solution, a logging solution, that fit really well with IKS, our Kubernetes service. And it's cloud-native, high scale, large numbers of clusters, that's what our customers are building. That's what we want to use internally as well. I mean, we were looking for a very robust cloud-native logging service that we could use ourselves, and that's when we ran across these guys. What, about a year ago? >> Yeah, I mean, I think we kind of first got introduced at last year's KubeCon, and then it went to Container World, and we just kept seeing each other. >> And we just kept on rolling with it. So what we've done with that integration, what's nice about the integration, is it's directly in the catalog. So it's another service in the catalog, you go and select it, and provision it very easily. But what's really cool about it is we wanted to have that integration directly with the Kubernetes service as well, so there's the Integration tab on the Kubernetes service, literally one button, two lines of code that you just have to execute, bam! All your logs are now streaming for the entire cluster, with all the indexing and everything. It just makes it a really nice, rich experience to capture your logs. >> This is infrastructure as code, that's what the promise was. >> Absolutely, yes. >> You have very seamless integration and the backend just works. Now talk about the Kubernetes pieces. I think this is fascinating, 'cause we've been pontificating and evaluating all the commentary here in theCUBE, and we've come to the conclusion that cloud's great, but there's other new platform-like things emerging. You got Edge and all these things, so there's a whole new set, new things are going to come up, and it's not going to be just called cloud, it's going to be something else. There's Edge, you got cameras, you got data, you got all kinds of stuff going on. Kubernetes seems to fit a lot of these emerging use cases. Where does Kubernetes fit in? You say you built on Kubernetes, just why is that so important? Explain that one piece. >> Yeah, I mean, I think Kubernetes obviously brought a lot of opportunities for us. The big differentiator for us was, because we were built on Kubernetes from the get-go, we made that decision a long time ago, we didn't realize we could actually deploy this package anywhere. It didn't have to be, we didn't have to just run as a multi-tenant SaaS product anymore, and I think part of that is, for IBM, when they're talking about an integrated logging service, we're actually running on IBM Cloud, so their customers can be sure that the data doesn't actually move anywhere else. It's going to stay in IBM Cloud and-- >> This is really important, and because they're on the Kubernetes service, it gives them the opportunity, running on Kubernetes, running it as a service, they're going to be able to put LogDNA in each of the major regions. So customers will be able to keep their log data in the regions that they want it to stay. >> Great for compliance. >> Absolutely. >> I mean, compliance, dreams-- >> Got to have it. >> Especially with EU. >> How about search and discovery, does that fit in too? Just simple, what's your strategy on that? >> Yeah, so our strategy is, if you look at a lot of the logging solutions out there today, a lot of times they require you to learn complex query languages and things like that. And so the biggest thing we were hearing was like, man, onboarding is really hard, because some of our developers don't look at logs on a daily basis. They look at it every two weeks. >> Jerry Chen from Greylock Ventures said machine learning is the new, ML is the new SQL. >> Yup. (Daniel laughing) >> To your point, this complex querying is going to be automated away. >> Yup. >> Yes. >> And you guys agree with that. >> Oh, yeah. >> You actually, >> Totally agree with that. >> you talked about it in our interview. >> Norman, wonder if you can bring us in a little bit on compliance and what discussions you're having with customers. Obviously GDPR, big discussion point we had. We've got new laws coming from California soon. So how important is this to your customers, and what's the reality kind of out there in your user base? >> Yeah, compliance was, our founders had run a lot of different businesses before. They had two major startups where they worked with eBay, compliance was the big thing, so we made a decision early on to say, hey, look, we're about 50 people right now, let's just do compliance now. I've been at startups where we go, let's just keep growing and growing and we'll worry about compliance later-- >> Yeah, bite you in the ass, big time.

>> Yeah, we made a decision to say, hey, look, we're smaller, let's just implement all the processes and necessary needs now, so. >> Well, the need's there too, that's two things, right? I mean, get it out early. Like security, build it up front and you've got it in. >> Exactly. >> And remember earlier we were talking and I was telling you how, within the Kubernetes service, we like to use our own services to build expertise? It's the same thing here. Not only are they running on top of IKS, we're using LogDNA to manage the logs and everything across the infrastructure for IKS as well. So we're heavily using it. >> This also highlights, Daniel, the ecosystem dynamic of having, when you break down these monolithic types of environments into their sets of services, you benefit, because you can tap into a startup, and they can tap into IBM's goodness. It's like a somewhat simple biz dev deal, other than the rev-share component of the sales, but technically, this is what customers want. At the end game, they want the right tool, the right job, the right product. If it comes from a startup, you guys don't have to build it. >> I mean, exactly. Let the experts do it, we'll integrate it. It's a great relationship. And the teams work really well together, which is fantastic. >> What do you guys do with other startups? If a startup watches and says, hey, I want to be like LogDNA. I want to plug into IBM's cloud. I want to be just like them and make all that cash. What do they got to do? What's the model? >> I mean, we're constantly looking at startups and new business opportunities, obviously. We do this all the time. But it's got to be the right fit, alright? And that's important. It's got to be the right fit with the technology, it's got to be the right fit as far as culture and team dynamics, of not only my team but the startup's teams, and how we're going to work together, and this is why it worked really great with LogDNA. I mean, everything, it just all fit, it all made sense, and it had a good business model behind that as well. So, yes, there's opportunities for others, but we have to go through and explore all those. >> So, Norman, wonder if you can share, how's your experience been at the show here? We'd love to hear. You've got so many startups here, you got record-setting attendance for the show. What were your expectations coming in? What are the KPIs you're measuring with, and how has it met what you thought you were going to get? >> No, it's great. I mean, prior to last year's KubeCon we had not really done any events. We're a small company, we didn't want to spend the resources, but we came in last year, and I think what was refreshing was people would talk to us and we're like, oh, yeah, we're not an open source technology, we're actually a log vendor, and we'll-- (Stu laughing) So what we said was, hey, we'll brush that into an experience, and people were like, oh, wow, this is actually pretty refreshing. I'm not configuring my fluentd system, fluentd to tap into another Elasticsearch. There was just not a lot of that. I think this year the expectation was, we need the size doubled. We still wanted to get the message out there. We knew we were hot off the presses with the IBM public launch of our service on IBM Cloud. And I think we were expecting a lot. I mean, we more than doubled what our lead count was, and it's been an amazing conference. I mean, I think the energy that you get and the quality of folks that come by, it's like, yeah, everybody's running Kubernetes, they know what they're talking about, and it makes that conversation that much easier for us as well. >> Now you're CUBE alumni now too. It's the booth, look at that. (everyone laughing) Well, guys, thanks for coming on, sharing the insight. Good to see you again. Great commentary, again, having distinguished engineering, and these kinds of conversations really help the community figure out kind of what's out there, so I appreciate that. And if everything's going to be on Kubernetes, then we should put theCUBE on Kubernetes. With these videos, we'll be on it, we'll be out there. >> Hey, yeah, absolutely, that'd be great. >> TheCUBE covers day three, breaking it down here. I'm John Furrier, Stu Miniman. That's a wrap for us here in Seattle. Thanks for watching, and look for us next year, 2019. That's a wrap for 2018. Stu, good job. Thanks for coming on, guys, really appreciate it. >> Thanks. >> Thank you. >> Thanks for watching, see you around. (futuristic instrumental music)
SUMMARY :
From KubeCon + CloudNativeCon North America 2018 in Seattle, John Furrier and Stu Miniman talk with Norman Hsieh of LogDNA and Daniel Berg of IBM about LogDNA's developer-focused logging service, its two-line kubectl integration for Kubernetes, the partnership that puts LogDNA in the IBM Cloud catalog and across IBM Cloud regions for compliance, and how the two companies came together over the past year.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
Daniel Berg | PERSON | 0.99+ |
Norman Hsieh | PERSON | 0.99+ |
Norman | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
eBay | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
two lines | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Dan | PERSON | 0.99+ |
Greylock Ventures | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
Daniel | PERSON | 0.99+ |
three days | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Elastic | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
IBMs | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
Seattle, Washington | LOCATION | 0.99+ |
DockerCon | EVENT | 0.99+ |
LogDNA | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.98+ |
one piece | QUANTITY | 0.98+ |
IMB | ORGANIZATION | 0.98+ |
Stu | PERSON | 0.98+ |
IKS | ORGANIZATION | 0.98+ |
single machines | QUANTITY | 0.98+ |
single machine | QUANTITY | 0.98+ |
IBM Cloud | ORGANIZATION | 0.98+ |
IMB Cloud | TITLE | 0.97+ |
one button | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
two | QUANTITY | 0.97+ |
each | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
CloudNativeCon | EVENT | 0.96+ |
today | DATE | 0.94+ |
CloudNativeCon North America 2018 | EVENT | 0.94+ |
single log line | QUANTITY | 0.93+ |
KubeCon 2018 | EVENT | 0.93+ |
thousands | QUANTITY | 0.92+ |
first | QUANTITY | 0.91+ |
GDPR | TITLE | 0.91+ |
about 50 people | QUANTITY | 0.91+ |
Container World | ORGANIZATION | 0.91+ |
day three | QUANTITY | 0.9+ |
this year | DATE | 0.9+ |
two major startups | QUANTITY | 0.9+ |
three | QUANTITY | 0.89+ |
Edge | TITLE | 0.88+ |
DevOps | TITLE | 0.88+ |
EU | ORGANIZATION | 0.87+ |
about a year ago | DATE | 0.86+ |
a week | QUANTITY | 0.86+ |
Elasticsearch | TITLE | 0.85+ |
Christine Yen, Honeycomb.io | DevNet Create 2018
>> Announcer: Live from the Computer History Museum in Mountain View, California, it's theCUBE, covering DevNet Create 2018. Brought to you by Cisco. >> Hey, welcome back, everyone. This is theCUBE, live here in Mountain View, California, heart of Silicon Valley, for Cisco's DevNet Create. This is their cloud developer event. It's not the main Cisco DevNet, which is more for the Cisco developer crowd; this is much more Cloud Native DevOps. I'm joined with my cohost, Lauren Cooney, and our next guest is Christine Yen, who is co-founder and Chief Product Officer of Honeycomb.io. Welcome to theCUBE. >> Thank you. >> Great to have an entrepreneur and also a Chief Product Officer, because you blend in the entrepreneurial zeal, but also you've got to build the product in the Cloud Native world. You guys have done a few ventures before. First, take a minute and talk about what you guys do, what the company is built on, what's the mission? What's your vision? >> Absolutely. Honeycomb was built, we are an observability platform, to help people find the unknown unknowns. Our whole thesis is that the world is getting more complicated. We have microservices and containers, and instead of having five application servers that we treated like pets in the past, we now have 500 containers running that are more like cattle, where any one of them might die at any given time. And we need our tools to be able to support us to figure out how and why, and when something happens, what happened and why, and how do we resolve it? We look around at the landscape and we feel like there's this dichotomy out there of, we have logging tools and we have metrics tools. And those really evolved from the fact that in 1995, we had to choose between grep or counters. And as technology evolved, those evolved to distributed grep or RDS, and then we have distributed grep with fancy UIs and, well, fancy RDS with UIs. And Honeycomb, we were started a couple years ago. We really feel like, what if you didn't have to choose? What if technology supported the power of having all the context there, the way that you do with logs, while still being able to provide instant analytics, the way that you have with metrics? >> So the problem that you're solving is, one, antiquated methodologies from old architectures and stacks, if you will, and helping people save time with the arcane tools. Is that the main premise? >> We want people to be able to debug their production systems. >> All right, so, beyond that now, the developer that you're targeting, can you take us through a day in the life of where you are helping them, vis-a-vis the old way? >> Absolutely, so I'll tell a story of when myself and my co-founder, Charity, were working together at Parse. Parse, for those who aren't familiar, used to be a backend-as-a-service for mobile apps. You can think of someone who just wants to build an iOS app and doesn't want to deal with data storage, user records, things like that. And Parse started in 2011, got bought by Facebook in 2013, and spun down at the very beginning of 2016. And in 2013, when the acquisition happened, we were supporting somewhere on the order of 60,000 different mobile apps. Each one of them could be a totally different workload, a totally different usage pattern, but any one of them might be experiencing problems. And again, in this old world, this pre-Honeycomb world, we had our top-level metrics. We had latency, response, overall throughput, error rates, and we were very proud of them. We were very proud of these big dashboards on the wall that were green. 
And they were great, except when you had a customer write in being like, "Hey, Parse is down." And we'd look at our dashboard and be like, "Nope, it's not down. It must be network issues." >> John: That's on your end. >> Yeah, that's on your end. >> John: Not a good answer. >> Not a good answer, and especially not if that customer was Disney, right? When you're dealing with these high-level metrics and you're processing tens or hundreds of thousands of requests per second, when Disney comes in, they've got eight requests a second and they're seeing all of them fail. Even though those are really important eight requests per second, you can't tease that out of your graphs. You can't figure out why they're failing, what's going on, how to fix it. You've got to dispatch an engineer to go add a bunch of "if app ID equals Disney" code, track it down, and figure out what's going on there. And it takes time. And when we got to Facebook, we were exposed to a type of tool that essentially inspired Honeycomb as it is today, that let us capture all this data, capture a bunch of information about everything that was happening, down to these eight requests per second. And when a customer complained, we could immediately isolate, oh, this one app, okay, let's zoom in. For this one customer, this tiny customer, let's look at their throughput, error rates, latency. Oh, okay, something looks funny there, let's break down by endpoint for this customer. And it's this iterative, fast, highly granular investigation that is what all of us are approaching today. With our systems getting more complicated, you need to be able to isolate. Okay, I don't care about the 200s, I only care about the 500s, and within the 500s, then what's going on? What's going on with this server, with that set of containers? >> So this is basically an issue of data, unstructured data, or having the ability to take this data in, at the same time with your eye on the prize of instrumentation, and then having the ability to make that addressable and discoverable in real time. Is that kind of it? >> Yeah. We've been using the term observability to describe this feeling of, I need to be able to find unknown unknowns. And instrumentation is absolutely the tactic to observability's strategy. It is how people will be able to get information out of their systems in a way that is relevant to their business. A common thing that we'll hear is people will ask, "Oh, can you ingest my nginx logs?" "Can you ingest my SQL logs?" Often that's a great place to start, but really, where are the problems in an application? Where are your problems in the system? Usually it's the places that are custom, that the engineers wrote. And tools need to be able to support providing information, providing graphs, providing analytics in a way that makes it easy for the folks who wrote the code to track down the problem and address it. >> It's a haystack of needles. >> Yeah, absolutely. >> They're all relevant, but you don't know which needle you're going to need. >> Exactly. >> So, let me just get this, just trying to understand, 'cause this is super important, because this is really the key to large-scale cloud ops, what we're talking about here. From a developer standpoint, and we just had a great guest on talking about testing features in production, which is really important, people want to do that. And then, but for one person, but at production scale, huge problem, opportunity as well. 
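The Disney anecdote above is easy to see with a little arithmetic. The sketch below uses made-up numbers to show how a small tenant failing completely can hide inside healthy top-level metrics until you break the data down per customer.

```python
# Toy numbers, not Parse's real traffic: a tiny tenant that is failing
# completely barely moves the top-level error rate, which is why the
# green dashboards told you nothing until you broke the data down.
requests = (
    [{"app": "big-app", "status": 200}] * 99_000 +
    [{"app": "big-app", "status": 500}] * 500 +
    [{"app": "small-customer", "status": 500}] * 8   # the 8-requests-a-second tenant
)

def error_rate(rows):
    return sum(r["status"] >= 500 for r in rows) / len(rows)

print(f"overall: {error_rate(requests):.2%}")          # ~0.51%, looks healthy

by_app = {}
for r in requests:
    by_app.setdefault(r["app"], []).append(r)
for app, rows in sorted(by_app.items()):
    print(f"{app:14s}: {error_rate(rows):.1%}")        # small-customer: 100.0%
```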
So, if most people think of, like, "Oh, I'll just ingest with Splunk," is that different? I mean, 'cause people think of Splunk, and they think of Redshift and Kinesis on Amazon, and they go, "Okay." Is that the solution? Are you guys different? Are you a tool? How do I understand you guys in context of those known solutions? >> First of all, let me explain the difference between ourselves and the Redshifts and BigQueries of the world, and then I'll talk about Splunk. We really view those tools as primarily things built for data scientists. They're in the big data realm, but they are very concerned with being 100% correct. They're concerned with fitting into big data tools, and they often have an unfortunate delay in getting data in and making it queryable. Honeycomb is 100% built for engineers, the folks who are going to be on the hook for, "Hey, there's downtime, what's going on?" And in-- >> So those are more for the business, more data warehouse-like. >> Yeah. And what that means is that for Honeycomb, everything is real time. It's real time. We believe in recent data. If you're looking to query data from a year ago, we're not really the thing, but instead of waiting 20 minutes for a query over a huge volume of data, you wait 10 seconds. Or it's 3:00 AM and you need to figure out what's happening right now, you can go from query to query to query to query, as you come up with hypotheses, validate them or invalidate them, and continue on your investigation path. So that's... >> That makes sense. >> Yeah. >> So data wrangling, doing queries, business intelligence, insights as a service, that's all that? >> Yeah. We almost, we played with and tossed the tagline "BI for systems," because we want that BI mentality of what's going on, let me investigate, but for the folks who need answers now. An approximate answer now is miles better than a perfect one-- >> And you can't keep large customers waiting, right? At the end of the day, you can't keep the large customers waiting. >> Well, it's also so complicated. The edge is very robust and diverse now. I mean, Node.js, there's a lot of I/O going on, for instance. So let's just take an example. I had a developer talking the other day with me about Node.js. It's like, oh, someone's complaining, but they're using Firefox. It's like, okay, different memory configuration. So the developer had to debug, because the complaints were coming in. Everyone else was fine, but the one guy is complaining because he's on Firefox. Well, how many tabs does he have open? What's the memory look like? So that's a weird thing, I mean, that's just a weird example, but those are just the kinds of diverse things that developers have to get on. And then where do they start? I mean. >> Absolutely. So, there's something we ran into, or we saw our developers run into, all the time at Parse, right? These are mobile developers. They have to worry about not only which version of the app it is, they have to worry about which version of the app, using which version of our SDK, on which version of the operating system, where any kind of strange combination of these could result in some terrible user experience. And these are things that don't really work well if you're relying on pre-aggregated time series systems, like the evolution of the RDS I mentioned. And for folks who are trying to address this, something like Splunk, these logging tools, frankly, a lot of these tools are built on storage engines that are intended for full-text search. 
They're unstructured text, you're grepping over them, and then you're building indices and structure on top of that. >> There's some lag involved too in that. >> There's so much lag involved. And there's almost this negative feedback loop built in, where if you want to add more data, if on each log line you want to start tracking browser user agent, you're going to incur not only extra storage costs, you're going to incur extra read-time costs, because you're reading that much more data even if you don't care about that field on those queries, and you're probably incurring costs at write time to maintain these indices. Honeycomb, we're a column store through and through. We do not care about your unstructured text logs, we really don't want them. We want you to structure your data-- >> John: Did you guys write your own column store, or is that? >> We did write our own column store, because ultimately there's nothing off the shelf that gave us the speed that we wanted. We wanted to be able to say, hey, send us data blobs with 20, 50, 200 keys, but if you're running analysis and all you care about is a simple filter and a count, you shouldn't have to pull in all this-- >> So it becomes sort of like a Ferrari, if you customize it, it's really purpose-built, is that what you guys did? >> That is. >> So talk about the dynamic, because now you're dealing with things like, I mean, I had a conversation with someone who's looking at, say, blockchain, where there's some costs involved, obviously, writing to the blockchain. And this is not like a crypto thing, it's more of a supply chain thing. They want visibility into latency and things of that nature. Does this sound like you would fit there as a potential use case? Is that something that you guys thought of at all? >> It could absolutely be. I'm actually not super familiar with the blockchain or blockchain-based applications, but ultimately Honeycomb is intended for you to be able to answer questions about your system in a way that tends to stymie existing tools. So we see lots of people come to us from strange use cases who just want to be able to instrument, "Hey, I have this custom logic. I want to be able to look at what it's doing." And when a customer complains and my graphs are fine, or when my graphs are complaining, being able to go in and figure out why. >> Take a minute to talk about the company you founded. How many employees, funding, if you can talk about it, and the use case customers you have now. And how do you guys engage? The service, is it, do I download code? Is it SaaS? I mean, you got all this great tech. What's the value proposition? >> I think I'll answer this-- >> John: Company first. >> All right. >> John: Status of the company. >> Sure. Honeycomb is about 25 people, 30 people. We raised a Series A in January. We are about two and a half years old, and we are very much SaaS of the future. We're very opinionated about a number of things and how we want customers to interact with us. So, we are SaaS only. We do offer a secure proxy option for folks who have PII concerns. We only take structured data. So at our API, you can use whatever you want to slurp data from your system, but at our API, we want JSON. We do offer a wide variety of integrations, connectors and SDKs to help you structure that data. But ultimately-- >> Do you provide SDKs to your customers? >> We do, so that if they want to instrument their application, we just have the niceties around, like, batching and doing things asynchronously, so it doesn't block their application. 
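To make the column-store point above concrete, here is a toy sketch of why a filter-and-count only touches the columns the query references; it illustrates the idea rather than Honeycomb's actual engine.

```python
# Toy sketch of the column-store idea, not Honeycomb's actual engine:
# events are stored as one array per field, so a filter-and-count reads
# only the columns the query references, no matter how wide each event is.
columns = {
    "status":      [200, 500, 200, 503, 200],
    "endpoint":    ["/a", "/a", "/b", "/a", "/b"],
    "duration_ms": [12, 340, 15, 982, 11],   # never touched by the query below
    # ... potentially dozens or hundreds more fields per event ...
}

def count_where(cols, field, predicate):
    """COUNT(*) WHERE predicate(field) -- scans exactly one column."""
    return sum(1 for value in cols[field] if predicate(value))

print(count_where(columns, "status", lambda s: s >= 500))   # -> 2
```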
But ultimately, we try to meet folks where they're at, but it's 2016, it was 2017, 2018-- >> You have a hardened API; the API pretty much defines your service from an inbound standpoint. Prices, cost, how does someone engage with you guys? When does someone know to engage? Where are the smoke signals? When is the house on fire? Is it like people are standing around? What's the problem? When does someone know to call you guys up? >> People know to call us when they're having production problems that they can't solve, when it takes them way too long to go from "there's an alert that went off or a customer complaint" to "oh, I found the problem, I can address it." We price based on storage. So we are a bunch of engineers, we try to keep the business side as simple as possible, for better or for worse. And so the more data you send us, the more it'll cost. If you want a lot of data but stored for a short period of time, that will cost less than a lot of data stored for a long period of time. One of the other approaches that is possibly more common in the big data world and less in the monitoring world is we talk a lot about sampling, sampling as a way to control those costs. Say you are Facebook, again, I'll return to that example. Facebook knew that in this world where lots and lots of things can go wrong at any point in time, you need to be able to store the actual context of a given event happening, some unit of work. You want to keep track of all the pieces of metadata that make that piece of work unique. But at Facebook scale, you can't store every single one of them. So, all right, you start to develop these heuristics. What things are more interesting than others? Errors are probably more interesting than 200 OKs. Okay, so we'll keep track of most errors, we'll store 1% of successful requests. Okay, well, within that, what about errors? Okay, well, things that time out are maybe more interesting than things that are permissioning errors. And you start to develop this sampling scheme that essentially maps to the interestingness of the traffic that's flowing through your system. To throw out some numbers, I think-- >> Machine learning is perfect for that too. They can then use the sampling. >> Yeah. There's definitely some learning that can happen to determine what things should be dropped on the ground, what requests are perfectly representative of a large swath of things. And Instagram used a tool like this inside Facebook. They stored something like a 1/10 of a percent or a 1/100 of a percent of their requests, 'cause simply that was enough to give them a sketch of representative traffic, what's going wrong, or what's weird and worth digging into. >> Final question. What are your priorities for the product roadmap? What are you guys focused on now? Got some fresh funding, that's great, so expand the team, hiring probably. Like, product, what's the focus on the product? >> The focus on the product is making this mindset of observability accessible to software engineers. Right, we're entering this world where, more and more, it's the software engineers deploying their code, pushing things out in containers. And they're going to need to also develop this sense of, "Okay, well, how do I make sure something's working in production? How do I make sure something keeps working? And how do I think about correctness in this world where it's not just my component, it's my component talking to these other folks' pieces?" 
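Here is a rough sketch of the interestingness-based sampling Christine describes above; the specific rates and field names are made up for illustration, not Honeycomb's implementation.

```python
# Made-up sample rates for illustration of the idea described above:
# keep what is interesting, heavily downsample what is routine, and record
# the rate on each kept event so counts can be re-weighted at query time.
import random

SAMPLE_RATES = {"timeout": 1, "error": 5, "ok": 100}   # keep 1 in N

def classify(event):
    if event.get("timed_out"):
        return "timeout"
    return "ok" if event["status"] < 400 else "error"

def maybe_keep(event):
    rate = SAMPLE_RATES[classify(event)]
    if random.randrange(rate) == 0:
        event["sample_rate"] = rate      # each kept event stands in for `rate` events
        return event
    return None

traffic = [{"status": 200}] * 10_000 + [{"status": 500}] * 100
kept = [e for e in (maybe_keep(ev) for ev in traffic) if e]
estimated_total = sum(e["sample_rate"] for e in kept)   # roughly 10,100
print(len(kept), estimated_total)
```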
We believe really strongly that the era of this single person in a room keeping everything up is outdated. It's teams now, it's on-call rotations. It's handing off the baton and sharing knowledge. One of the things that we're really trying to build into the product, and we're hoping this is the year we can really deliver on it, is this feeling of: I might not be the best debugger on the team, or the best constructor of graphs on the team, and John, you might be. But how can a tool help me, as a new person on a team, learn from what you've done? How can a tool help me be like, "Oh man, last week when John was on call, he ran into something around MySQL also." History doesn't repeat, but it rhymes. So how can I learn from the sequence of those things-- >> John: Something like an expert system. >> Yeah. Like how can we help build experts? How can we raise entire teams to the level of the best debugger? >> And that's the beautiful thing with metadata, metadata is a wonderful thing. 'Cause Jeff Jonas, a Cube alumni and famous data entrepreneur, said that observation space is super critical for understanding how to make AI work. And that's to your point, having observation data is super important. And of course our observation space is all things. Here at DevNet Create, Christine, thanks for coming on theCUBE, spending the time. >> Thank you. >> Fascinating story, great new venture. Congratulations. >> Christine: Thank you. >> And tackling the world of making developers more productive in real time in production. Really making an impact for coders, sharing and learning. Here in theCUBE, we're doing our share, with live coverage here in Mountain View at DevNet Create. We'll be back with more after this short break. (gentle music)