Scott Johnston, Docker | KubeCon + CloudNativeCon NA 2022


 

(upbeat music) >> Welcome back, everyone. Live coverage here at KubeCon + CloudNativeCon in Detroit, Michigan. I'm John Furrier, your host of theCUBE, for a special one-on-one conversation with Scott Johnston, who's the CEO of Docker, a CUBE alumni, been around the industry, multiple cycles of innovation, leading one of the most important companies in today's industry inflection point, as Docker, what they've done since their, I would say, restart from the old Docker to the new Docker, now modern, and the center of the conversation with containers driving the growth of Kubernetes. Scott, great to see you. Thanks for coming on theCUBE. >> John, thanks for the invite. Glad to be here. >> You guys have had great success this year with extensions. Docker as a business model's grown. Congratulations, you guys are monetizing well. Pushing up over 50 million. >> Thank you. >> I hear maybe pushing over a hundred million. That's what my ear to the ground tells me, but it's a good sign. Plus you've got the community and nurturing of the ecosystem continuing to power away, and open source is not stopping. It's thundering away with growth. Younger generation coming in. >> That's right. >> The developer toolchain that you have has become consistent, almost a de facto standard. Others are coming in the market. A lot of competition emerging. You got a lot going on right now. What's going on? >> Well, it's a fantastic time in our industry. All companies are becoming software companies. That means they need to build new applications. That means they need developers to be productive and to be safely productive. And we, and this wonderful CNCF ecosystem, are right in the middle of that trend, so it's fantastic. >> So you have millions of developers using Docker. >> Tens of millions. >> Tens of millions of developers use Docker, and as the market's changing, I was commenting before we came on camera, and I'd love to get your reaction, comment on it.
You guys represent the modernization of containers, open source. You haven't really changed how open source works, but you've kind of modernized it. You're starting to see developers at the front lines, more and more power going to developers. >> Scott: That's right. >> They want self-service. They vote with their code. >> That's right. >> They vote with their actions. >> Scott: That's right. >> And if you take digital transformation to its conclusion, it's not that IT serves the business or it's a department, the company is IT. >> That's right. >> The company is the application, which means developers are running everything. >> Yes, yes. I mean, one of the jokes, not jokes, in the valley is that Tesla isn't a car company. Tesla is a computer company that happens to have wheels on the computer. And I think we can smile at that, but there are so many businesses, particularly during COVID, that realized that. What happened during COVID? If you were going to the movies, nope, you're now going to Netflix. If you were going to the gym, now you're doing Peloton. So this realization that I have to have a digital game, not just on the side, but it has to be the forefront of my business and drive my business. That realization is now in any industry, any company across the board. >> We've been reporting aggressively for the past three years now. Even now we're calling some things supercloud. If companies don't realize that IT is not a department, they will probably be out of business. >> That's a hundred percent right. >> It's going to transform into full-on invisible infrastructure. Infrastructure as code, whatever you want to call that, configuration, operations, developers will set the pace. This has a lot to do with some of your success. You're at the beginning of it. This is just the beginning. What can you talk about that in your mind is contributing to the success of Docker?
I know you're going to say team, everything, I get that, but what specifically in the industry is driving Docker's success right now? >> Well, we did have a fantastic team. We do have a fantastic team, and that is one of the primary reasons for our success. But what is also happening, John, is that there's a demand for applications. I'll just throw it out there: 750 million new applications are coming to market in the next two years. That is more applications than have been developed in the entire 40-year history of IT. So just think about the productivity demands that are coming at developers. And then you also see the need to do so safely, meaning ship quickly, but ship safely. And yet 90-some percent of every application consists of open source components that are now an attack surface for criminals. And so typically our industry has had to say one or the other: okay, you can ship quickly but not safely, or you can ship safely, but it's not going to go fast. And one of the reasons I think Docker is where it is today is that we're able to offer both. We're able to unlock that you can ship quickly, safely, using Docker, using the Docker toolchain, using integrations we have with all the wonderful partners here at CNCF. That is unique. And that's a big reason why we're seeing the success we're seeing. >> And you're probably pleased with extensions this year. >> Yes. >> The performance of extensions that you launched at DockerCon '22. >> Yes. Well, extensions are part of that story in that developers have multiple tools. They want choice. Developers like choice to be productive, and Docker is part of that, but it's not the only solution. And so Docker extensions allow the monitoring providers and the observability, and if you want a separate Kubernetes stack, all of that flexibility, extensions allow. And again, it offers the power and the innovation of this ecosystem to be used in a Docker development context.
>> Well, I want to get into some of the details of some of your products and how they're evolving. But first I want to get your thoughts on the trend line here that we reported in the opening segment. The hot story is WebAssembly, Wasm, which really got a lot of traction, or interest. People are enthused about it. >> Interest, yeah. >> A lot of enthusiasm. Confidence, we'll see how that evolves, but a lot of enthusiasm for sure. I've never seen something this hyped up since Envoy, in my opinion. So a lot of interest from developers. WebAssembly is actually what it is, but Wasm is the code word or nickname. What is Wasm? >> So in brief, WebAssembly is a new application type, full stop. And it's just enough of the components that you need, and it's just a binary format that is very, very secure. And so it's lightweight, it's fast, and secure. And so it opens up a lot of interesting use cases for developers, particularly on the edge. Another use case for Wasm is in the browser. Again, lightweight, fast, secure also. >> John: Sounds like an app server to me. >> And so we think it's a very, very interesting trend. And you ask, okay, what's Docker's role in that? Well, Docker has been around eight years now, eight plus years, tens of millions of developers using it. They've already made investments in skills, talent, automation, toolchains, pipelines. And Docker started with Linux containers, as we know, then brought that same experience to Windows containers, then brought it to serverless functions. About 25% of Amazon Lambdas are OCI image containers. And so we were seeing that trend. We were also seeing the community, actually without any prompting from us, start to fork and play with Docker and apply it to Wasm. And we're like, huh, that's interesting. What if we helped get behind that trend, such that you change just one line of a Dockerfile, now you're able to produce Wasm objects instead of Linux containers, and just bring that same ease of use.
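For context, the "change one line" workflow Scott describes might look roughly like this. This sketch follows the Docker+Wasm technical preview announced around the time of this event; the `runtime` and `platform` values reflect that preview, and the image names are placeholders, not anything stated in the interview.

```yaml
# Hypothetical compose file: a Linux-container service and a Wasm service
# running side by side. Image names are invented for illustration; the
# runtime/platform keys follow the Docker+Wasm technical preview.
services:
  api:
    image: my-org/api:latest             # ordinary Linux container
    ports:
      - "8080:8080"
  filter:
    image: my-org/filter:wasm            # Wasm module packaged as an OCI image
    runtime: io.containerd.wasmedge.v1   # the "one line" that switches runtimes
    platform: wasi/wasm32
```

The point of the design is that the rest of the toolchain — build, push, pull, compose up — stays exactly the same for both services.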
>> So that's not a competition to Docker? >> Not a competition at all. In fact, very complementary. We showed off on Monday at the Wasm day how, in the same Docker compose application, a multi-service application, one service is delivered via Linux container, another service is delivered via Wasm. >> And Wasm is what? Multiple languages? 'Cause what is it? >> Yes. So the binary can be compiled from multiple languages. So Rust, JavaScript, on and on and on. At the end of the day, it's a smaller binary that provides a function, typically a single function, that you can stand up and deploy on an edge. You can stand up and deploy it on the server side, or stand up and deploy it in the browser. >> So from a container standpoint, from your customer standpoint, what a Linux container is is a similar thing to what a Wasm container is. >> They could implement the same function. That's right. Now a Linux container can have more capabilities that a function might not have, but that's. >> John: From a workflow standpoint. >> That's right. And that's more of a use case by use case standpoint. What we serve is developers, and we started out serving developers with Linux containers, then Windows containers, then Lambdas, now Wasm. Whatever other use case, whatever other application type comes along, we want to be there to serve developers. >> So one of the things I want to get your thoughts on, because this has come up in a couple of CUBE interviews before, and we were talking before we came on camera, is developers want ease of use and simplicity. They don't want more steps to do things. They don't want things harder. >> That's right. So the classic innovation is reduce the time it takes to do something, reduce the steps, make it easier. That's a formula for success. >> Scott: That's right. >> When you start adding more toolchains into the mix, you get tool sprawl. So that's the antithesis of developer ease. So the argument is, okay, do I have to use a new toolchain for Wasm?
Is that a fact or no? >> That's exactly right. That was what we were seeing, and we thought, well, how can Docker help with this situation? And Docker can help by bringing the same existing toolchain that developers are already familiar with. The same automation, the same pipelines. And just by changing a line of a Dockerfile, changing a single line of a compose file, now they get the power of Wasm unlocked in the very same tools they were using before. >> So your position is, hey, don't adopt some new toolchain for Wasm. You can just do it in line with Docker. >> No need to, no need to. We're providing it right there out of the box, ready for them. >> That's embrace and extend, as they would say, the old Microsoft strategy there. That's nice. Okay, so let's get back into the secure, trusted side, 'cause that was another theme at DockerCon. We covered that deeply. Software supply chain. I was commenting in my intro with Savannah and Lisa that at some point open source is so plentiful, you might not have to write code. You just glue it together. So as code proliferates, the question is, what's in there? >> That's right. >> This is what they call the software supply chain. You've been all over this. Where are we with this? Is it harder now? Is it easier? Was there progress? Take us through the state of the art. >> I think we're early on this one, John, in the industry, because I think the realization of how much open source is inside a given app is just now hitting consciousness. And so the data we have is that for any given application, anywhere from 75 to 85% is actually not unique to the developer or the organization. It's open source components that they have put together. And it's really down to that last 15, 25%, which is their own unique code that they're adding on top of all this open source code. So right there, it's like, aha, that's a pretty interesting profile, or distribution of value, which means those open source components, where are they finding them?
How are they integrating them? How do they know those open source components are going to be supported and trusted and secured? And that's the challenge for us as an industry right now is to make it just obvious where to get the components, how safe they are, who's standing behind them, and how easy it is to assemble them into a working application. >> All right. So the question that I had specifically on security 'cause this had come up before. All good on the trusted and I think that message is evergreen. It's a north star. That's a north star for you. How are you making images more secure and how are you enabling organizations to identify security issues in containers? Can you share your strategy and thoughts on that particular point? >> Yes. So there's a range of things in the secure software supply chain and it starts with, are you starting with trusted open source components that you know have support, that you know are secured? So in Docker Hub today, we have 14 million applications, but a subset of that, we've worked with the upstream providers to basically designate as trusted open source content. So this is the Docker official images, Docker verified publisher images, Docker sponsored open source. And those different categories have levels of certification assurance that they must go through. Generate an SBOM, so you know what's inside that container. It has to be scanned by a scanning tool and those scanning results have to be made available. >> John: Are you guys scanning that? >> So we provide a scanner, they can use another scanner as long as they publish the results of that scan. And then the whole thing is signed. >> Are you publishing the results on your side too? >> Yeah, we published our results through an open database that's accessible to all. >> Free. >> Free, a hundred percent free. You come in and you can see every image on hub. >> So I'm a user, for free I can see security vulnerabilities that are out there that have been identified. 
>> By version, by layer, all the way through. And you can see tracking all the way back to the package that's upstream. So you know how to remediate, and we provide recommendations on how to remediate that with the latest version. >> John: And you don't charge for that. >> We don't charge for that. We do not charge for that. And so that's the trusted upstream. >> So an organization can look at the scan, they can look at the scan data and hopefully, what happens if they're not scanned? >> So we provide scanning tools both for local environments with Docker Desktop, as well as for hub. So if you want to do your own scan, so for example, when you're that developer adding the 15, 25%, you've got to scan your stuff as well. Not just leave it up to the already scanned components. And so we provide tools there. We also provide tools to track the packages that that developer might be including in their custom code, all the way back upstream to whatever npm repo or what have you that they picked up from. And then if there's a CVE 30 days later, we also track that as well. We say, hey, that package was safe 29 days ago, but today a CVE just came out, better upgrade to the latest version and get that out there. So basically, if you get down to it, it's start with trusted components and then have observability, not just in the moment. >> And scan all the time. >> Scan all the time, and scanning gives you that observability, and importantly, not just at that moment, but through the lifecycle of the application, through the lifecycle of the artifact. So end-to-end, 24/7 observability of the state of your supply chain. That's what's key, John. >> That's the best practice. >> That's the key. That's the key. >> Awesome, I agree. That's great. Well, I'm glad we dug into that. It's super important. Obviously organizations can get that scanning, see the vulnerabilities, and take action. That's going to be a big focus here for you, security. It's not going to stop, is it?
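The lifecycle idea Scott outlines — a package that was safe 29 days ago gets flagged the day a CVE lands — can be sketched as a toy in Python. The package names, versions, and CVE records below are invented for illustration; a real pipeline would match an image's SBOM against a live advisory database rather than an in-memory dict.

```python
# Toy sketch of continuous supply-chain monitoring: given an SBOM (the list
# of packages inside an image) and a feed of CVE advisories, report which
# components are affected and what version to upgrade to.

def affected(sbom, advisories):
    """Return remediation advice for every SBOM entry hit by an advisory."""
    advice = []
    for pkg, version in sbom.items():
        for adv in advisories.get(pkg, []):
            if version in adv["affected_versions"]:
                advice.append({
                    "package": pkg,
                    "installed": version,
                    "cve": adv["cve"],
                    "upgrade_to": adv["fixed_in"],
                })
    return advice

# Invented example data: the image was "clean" until CVE-2022-0001 published.
sbom = {"libexample": "1.2.0", "othertool": "4.1.0"}
advisories = {
    "libexample": [
        {"cve": "CVE-2022-0001",
         "affected_versions": {"1.1.0", "1.2.0"},
         "fixed_in": "1.2.1"},
    ],
}

print(affected(sbom, advisories))
```

Running the check again whenever the advisory feed updates, rather than only at build time, is what gives the "24/7 observability" Scott describes.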
>> It's never going to stop, because criminals are incentivized to keep attacking. And so it's the gift that keeps on giving, if you will. >> Okay, so let's get into some of the products. Docker Desktop seems to be doing well. Docker Hub has always been a staple of it. How's that going? >> Yeah, Docker Hub has 18 million monthly actives hitting it, and that's growing by double digits year over year. And what they're finding, going back to our previous thread, John, is that they're coming there for the trusted content. In fact, those three categories that I referenced earlier are about 2000 applications of the 14 million. And yet they represent 56% of the 15 billion downloads a month from Docker Hub. Meaning developers are identifying that, hey, I want a trusted source. We raise those in the search results and we have a visual cue. And so that's the big driver of hub's growth right now: I want trusted content, where do I go? I go to Hub, download that trusted open source, and I'm ready to go. >> I have been seeing some chatter on the internet, some people sharing that they're looking at other places besides hub to do some things. What's your message to folks out there around Docker Hub? Why Docker Hub and desktop together? 'Cause you mentioned the toolchain before, but those two areas, I know they've been around for a while, you continue to work on them. What's the message to the folks out there about staying with the hub? >> Sure. I mean, the beauty of our ecosystem is that it's interoperable. The standards for build, share and run, we're all using them here at CNCF. So yes, there are other registries. What we would say is we have the 18 million monthly actives that are pulling, we have the worldwide distribution that is 24/7, five nines reliability, and frankly, we're there to provide choice. And so yes, we have our trusted content, but for example, the Tanzu apps, they also distribute through us.
Red Hat applications also distribute through us, because we have the reach and the distribution, and we offer developers choice of Docker's content, choice of Red Hat's content, choice of VMware's, choice of Bitnami, and so on and so forth. So come to the hub for the distribution and the reach, and the requirements we have for security that we put in place for our publishers give users and publishers an extra degree of assurance. >> So the Docker Hub is an important part of the system? >> Scott: Yes, very much so. >> And desktop, what's new with desktop? >> So desktop, of course, is the other end of the spectrum. So if trusted components start up on Docker Hub, developers are pulling them down to the desktop to start assembling their application. And so the desktop gives that developer all the tools he or she needs to build that modern application. So you can have your build tooling, your debug tooling, your IDE sitting alongside there, your Docker run, your Docker compose up. And so the loop that we see happening is the dev will have a database they download from hub, a front-end, they'll add their code to it, and they'll just rapidly iterate. They'll make a change, stand it up, do a unit test, and when they're satisfied, do a git commit, and off it goes into production. >> And your goal obviously is to have developers stay with Docker for their toolchain, their experience, make it their home base. >> And their trusted content. That's right. The trusted content and the extensions are part of that, 'cause the extensions provide complementary tooling for that local experience. >> You guys have done an amazing job. I want to give you personal props. I've been following Docker from the beginning. When they had the pivot, they sold the enterprise business to Mirantis, went back to the roots, modernized, riding the wave. You guys are having a good time. I've got to ask the question, 'cause people always want to know, 'cause open source is about transparency: how are you guys making your money?
Business is good. How does that work, and what was the lucky, well, not lucky strike, but what was the aha moment? What was the trigger that just made you kick in this new monetization growth wave? >> So the monetization is per seat, per developer seat. And that changed in November 2019. We were pricing on the server side before, and as you said, we sold that off. And what changed is some of the trends we were talking about: the realization by all organizations that they had to become software companies. And Docker provided the productivity, in an engineered desktop product and the trusted content; it provided that productivity safely to developers. And frankly, we then priced it at a rate that is very reasonable from an economic standpoint. If you look at developer productivity, developers are paid anywhere from 150 to 300 to 400, 500 thousand, even higher. >> When you're paying your developers that much, then productivity is at a premium. And what we were asking for from companies from a licensing standpoint was really modest relative to making those developers productive. >> It's not like Oracle. I mean, talk about extracting the value out of the customer. But your point is, your positioning is always stay core to the open source, but for companies that adopt the structural change to be developer first, a software company, there's a premium to pay because you deliver value there. >> And they need the tooling to roll it out at scale. So the companies are paying us. They're rolling it out to tens of thousands of developers, John. So they need management, they need visibility, they need guardrails that are all around the desktop. So, but just to put a stat on it, so to your point about open source and the freemium wheel working: of our 13 million Docker accounts, 12 million are free, and about a million are paid-for accounts. And that's by design, because of the open source.
>> And you're not gouging developers per se. It's just, not gouging anyone, but you're not taking money out of their hands. It's the company. >> The company is paying for their productivity so that they can build safely. >> More goodness for the developer. >> That's right. That's right. >> Gouging would be more like the Oracle strategy. Don't comment. You don't need to comment. I keep saying that, but it's not like you're taxing. It's not heavy. >> No, $5 a month, $9 a month, $24 a month, depending on level. >> But I think the big aha to me, in my opinion, is that you nailed the structural change culturally for a company. If they adopt the software ecosystem approach for transforming their business, they've got to pay for it. Like a workflow, it's a developer tool. >> It's another tool. I mean, do they pay for their spreadsheet software? Do they pay for their back office ERP software? They do, >> That's my point. >> to make those people popular, or sorry, make those people successful, those employees successful. This is a developer tool to make developers successful. >> It's a great, great business model. Congratulations. What's next for you guys? What are you looking for? You just had your community events, you've got DockerCon coming up next year. What's on the horizon for you? Put a plug in for the company. What are you looking for? Hiring? >> Yeah, so we're growing like gangbusters. We grew from 60 with the reset. We're now above 300, and we're continuing to grow despite this economic climate. Our customers are very much investing in software capabilities, so that means they're investing in Docker. So we're looking for roles across the board: software engineers, product managers, designers, marketing, sales, customer success. So if you're interested, please reach out.
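To put the per-seat numbers above in rough perspective, here's a back-of-the-envelope calculation. The monthly prices are the figures quoted in the interview; the tier names and the $150,000 salary are assumptions (the salary is the low end of the range Scott cites), so the percentages are illustrative only.

```python
# Back-of-the-envelope: annual per-seat license cost vs. a developer salary.
# Dollar-per-month figures are from the interview; tier names are assumed.
monthly_tiers = {"Pro": 5, "Team": 9, "Business": 24}
salary = 150_000  # assumed annual salary, low end of the quoted range

for tier, per_month in monthly_tiers.items():
    annual = per_month * 12
    share = annual / salary * 100
    print(f"{tier}: ${annual}/yr per seat = {share:.3f}% of a ${salary:,} salary")
```

Even the top tier works out to a few hundred dollars a year per seat, a fraction of a percent of developer cost, which is the "modest relative to productivity" argument in numeric form.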
The next year is going to be really interesting, because we're bringing to market products that are doubling down on these areas, doubling down on developer productivity, doubling down on safety, to make it even more just automatic that developers just build, so they don't have to think about it. They don't need a new tool just to be safer. We hinted a bit about automating SBOM creation. You can see more of that pull through. And in particular, developers want to make the right decision. Everyone comes to work wanting to make the right decision. But what they often lack is context. They often lack, well, is this bit of code safe or not? Or is this package that I just downloaded over here safe or not? And so you're going to see us roll out additional capabilities that give them very explicit contextual guidance: should you use this or not? Or here's a better version over here, a safer version over there. So stay tuned for some exciting stuff. >> It's going to be a massive developer growth wave coming, even bigger than we've ever seen. Final question, just while I've got you here. Where do you see WebAssembly, Wasm, going? If you had to throw a dart at the board out a couple of years, what does it turn into? >> Yeah, so I think it's super exciting. Super exciting, John. And there are three use cases today. There's browser, there's edge, and there's server side in the data center and the cloud. We see the edge taking off in the next couple of years. It's just such a straight line through from what they're doing today and the value of standing up a single service on the edge. The server side needs some work on the Wasm runtime. The Wasm runtime is not multi-threaded today. And so there's some deep, deep technical work that's going on. The community's doing a fantastic job, but that'll take a while to play through. The browser is also making good progress. There's a component model that Wasm's working on that'll really ignite the industry.
That is going to take another couple of years as well. So I'd say let's start with the edge use case. Let's get everyone excited about that value proposition, and these other two use cases will come along. >> It'll all work itself out in the wash, as open source always does. Scott Johnston, the Chief Executive Officer at Docker. Took over at the reset, kicking butt and taking names. Congratulations. You guys are doing great. Continue to power the developer movement. Thanks for coming on. >> John, thanks so much. Pleasure to be here. >> We're bringing you all the action here, extracting the signal from the noise. I'm John Furrier. Day one of three days of wall-to-wall live coverage. We'll be back with our next guest after this short break. (gentle music)

Published Date : Oct 26 2022



Supercharge Your Business with Speed Rob Bearden - Joe Ansaldi | Cloudera 2021


 

>> Okay. We want to pick up on a couple of themes that Mick discussed, you know, supercharging your business with AI, for example, and this notion of getting hybrid right. So right now we're going to turn the program over to Rob Bearden, the CEO of Cloudera, and Manuvir Das, who's the head of enterprise computing at NVIDIA. And before I hand it off to Rob, I just want to say, for those of you who follow me on theCUBE, we've extensively covered the transformation of the semiconductor industry. We are entering an entirely new era of computing in the enterprise, and it's being driven by the emergence of data-intensive applications and workloads. No longer will conventional methods of processing data suffice to handle this work. Rather, we need new thinking around architectures and ecosystems. And one of the keys to success in this new era is collaboration between software companies like Cloudera and semiconductor designers like NVIDIA. So let's learn more about this collaboration and what it means to your data business. Rob, take it away. >> Thanks, Mick and Dave. That was a great conversation on how speed and agility is everything in a hypercompetitive hybrid world. You touched on AI as essential to a data-first strategy in accelerating the path to value in hybrid environments. And I want to drill down on this aspect. Today, every business is facing accelerating change. Everything from face-to-face meetings to buying groceries has gone digital. As a result, businesses are generating more data than ever. There are more digital transactions to track and monitor now. Every engagement with coworkers, customers and partners is virtual. From website metrics to customer service records and even onsite sensors, enterprises are accumulating tremendous amounts of data, and unlocking insights from it is key to our enterprises' success. And with data flooding every enterprise, what should businesses do?
At Cloudera, we believe this onslaught of data offers an opportunity to make better business decisions faster, and we want to make that easier for everyone, whether it's fraud detection, demand forecasting, preventative maintenance, or customer churn. Whether the goal is to save money or produce income, every day that companies don't gain deep insight from their data is money they've lost. And the reason we're talking about speed, and why speed is everything in a hybrid world and in a hyper competitive climate, is that the faster we get insights from all of our data, the faster we grow and the more competitive we are. Those faster insights are also combined with the scalability and cost benefit that cloud provides, and with security and edge-to-AI data intimacy. That's why the partnership between Cloudera and NVIDIA means so much. And it starts with a shared vision: making data-driven decision-making a reality for every business. Our customers will now be able to leverage virtually unlimited quantities and varieties of data to power an order of magnitude faster decision-making. And together we've turbocharged the enterprise data cloud to enable our customers to work faster and better, and to make integration of AI approaches a reality for companies of all sizes in the cloud. We're joined today by NVIDIA's Manuvir Das to talk more about how our technologies will deliver the speed companies need for innovation in our hyper competitive environment. Okay, Manuvir, thank you for joining us. Over to you now. >> Thank you Rob, for having me. It's a pleasure to be here on behalf of NVIDIA. We're so excited about this partnership with Cloudera. You know, when NVIDIA started many years ago, we started as a chip company focused on graphics.
But as you know, over the last decade, we've really become a full stack, accelerated computing company, where we've been using the power of GPU hardware and software to accelerate a variety of workloads, AI being a prime example. And when we think about Cloudera, your great company, there's three things we see, Rob. The first one is that for the companies that were already transforming themselves by the use of data, Cloudera has been a trusted partner for them. The second thing we've seen is that when it comes to using your data, you want to use it in a variety of ways with a powerful platform, which of course you have built over time. And finally, as we've heard already, you believe in the power of hybrid, that data exists in different places and the compute needs to follow the data. Now, if you think about NVIDIA's mission going forward to democratize accelerated computing for all companies, our mission actually aligns very well with exactly those three things. Firstly, you know, we've really worked with a variety of companies to date who have been the early adopters, using the power of acceleration by changing their technology and their stacks. But more and more we see the opportunity of meeting customers where they are, with tools that they're familiar with, with partners that they trust. And of course, Cloudera being a great example of that. The second part of NVIDIA's mission is we focused a lot in the beginning on deep learning, where the power of GPUs really shone through. But as we've gone forward, we found that GPUs can accelerate a variety of different workloads, from machine learning to inference. And so again, the power of your platform is very appealing. And finally, we know that AI is all about data, more and more data. We believe very strongly in the idea that customers put their data where they need to put it. And the compute, the AI compute, the machine learning compute, needs to meet the customer where their data is.
And so that matches really well with your philosophy, right? And Rob, that's why we were so excited to do this partnership with you. It's come to fruition. We have a great combined stack now for the customer and we already see people using it. I think the IRS is a fantastic example where, literally, they took the workflow they had, they took the servers they had, they added GPUs into those servers. They did not change anything. And they got an eight times performance improvement for their fraud detection workflows, right? And that's the kind of success we're looking forward to with all customers. So the team has actually put together a great video to show us what the IRS is doing with this technology. Let's take a look. >> How you doing? My name's Joe Ansaldi. I'm the branch chief of the technical branch in RAS, the research and statistical division of the IRS. Basically, the mission that RAS has is we do statistical and research work on all things related to taxes, compliance issues, fraud issues, you know, anything that you can think of, basically, we do research on that. We're running into issues now that we have a lot of ideas to actually do data mining on our big troves of data, but we don't necessarily have the infrastructure or horsepower to do it. So our biggest challenge is definitely the infrastructure to support all the ideas that the subject matter experts are coming up with, in terms of all the algorithms they would like to create. And diving deeper within the algorithm space, the actual training of those algorithms, the number of parameters each of those algorithms have. So that's really been our challenge now. With NVIDIA and Cloudera's help, and with the cluster we actually built out to test this on the actual fraud detection algorithm, our expectation was we were definitely going to see some speed up in computational processing times.
And just to give you context, the size of the data set the SME was actually working her algorithm against was around four terabytes. If I recall correctly, we had a 22 to 48 times speed up after we started tweaking the original algorithm. My expectations, quite honestly, in that sphere, in terms of the timeframe to get results, was that you guys actually exceeded them. It was really, really quick. In the near term, what's next is the subject matter expert is actually going to take our algorithm and run with that. So that's definitely the near term thing we want to do. Looking forward, maybe out a couple of months, we're also looking at procuring some A100 cards to actually test those out. As you guys can guess, our datasets are just getting bigger and bigger, and the demand to actually do something to get more value out of those data sets is just putting more and more demands on our infrastructure. So, you know, with the pilot, now we have an idea of the infrastructure we need going forward, and also, in terms of thinking of the algorithms and how we can approach these problems to actually code out solutions to them, now it's like the shackles are off and we can just run, you know, run to our heart's desire, wherever our imaginations take our SMEs, to actually develop solutions. Now we have the platforms to run them on. Just to close out, we really would be remiss, I've worked with a lot of companies through the years, and most of them have been spectacular, and you guys are definitely in that category. The whole partnership, as I said a little bit earlier, went really, really well, very responsive. I would be remiss if I didn't thank you guys. So thank you for the opportunity. And I also want to thank my guys,
my staff: Raul and David worked on this, Richie worked on this, Lex and Tony did a fantastic job, and I want to publicly thank them for all the work they did with you guys. And Chev, obviously, also is fantastic. So thank you everyone. >> Okay. That's a real great example of speed in action. Now let's get into some follow up questions, guys, if I may. Rob, can you talk about the specific nature of the relationship between Cloudera and NVIDIA? Is it primarily go to market, or are you doing engineering work? What's the story there? >> It's really both, go to market and engineering. The engineering focus is to optimize and take advantage of NVIDIA's platform to drive better price performance, lower cost, faster speeds, and better support for today's emerging data intensive applications. So it's really both. >> Great. Thank you. Manuvir, maybe you could talk a little bit more about why we can't just use existing general purpose platforms that are running all this ERP and CRM and HCM and, you know, all the Microsoft apps that are out there. What do NVIDIA and Cloudera bring to the table that goes beyond the conventional systems that we've known for many years? >> Yeah. I think, Dave, as we've talked about, the asset that the customer has is really the data, right? And the same data can be utilized in many different ways: some machine learning, some AI, some traditional data analytics. So the first step here was really to take a general platform for data processing, Cloudera Data Platform, and integrate with that. Now NVIDIA has a software stack called RAPIDS, which has all of the primitives that make different kinds of data processing go fast on GPUs. And so the integration here has really been taking RAPIDS and integrating it into Cloudera Data Platform, so that regardless of the technique the customer is using to get insight from the data, the acceleration will apply in all cases.
And that's why it was important to start with a platform like Cloudera rather than a specific application. >> So, I think this is really important, because if you think about it, you know, the software defined data center brought in some great efficiencies, but at the same time, a lot of the compute power is now going towards doing things like networking and storage and security offloads. So the reason this is important is because when you think about these data intensive workloads, we can now put more processing power to work for those, you know, AI intensive things. And so that's what I want to talk about a little bit. Maybe a question for both of you; maybe Rob, you could start. Think about AI that's done today in the enterprise. A lot of it is modeling in the cloud, but when we look at a lot of the exciting use cases, bringing real-time systems together, transaction systems and analytics systems, and real-time AI inference, even at the edge, there's huge potential for business value. On the consumer side, you're seeing a lot of applications with AI, biometrics and voice recognition and autonomous vehicles and the like. So you're putting AI into these data intensive apps within the enterprise. The potential there is enormous. So what can we learn from sort of where we've come from, maybe these consumer examples, and Rob, how are you thinking about enterprise AI in the coming years? >> Yeah, you're right. The opportunity is huge here, but you know, 90% of the cost of AI applications is the inference. And it's been a blocker in terms of adoption because it's just been too expensive and difficult from a performance standpoint. And new platforms like these being developed by Cloudera and NVIDIA will dramatically lower the cost of enabling this type of workload to be done.
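The transcript stays at the whiteboard level, but the point Manuvir makes, the same data-processing code accelerating "in all cases", rests on RAPIDS mirroring familiar APIs. A minimal sketch of that idea, using pandas as a stand-in (on a GPU-equipped cluster with RAPIDS installed, the import below would typically become `import cudf as pd`, since cuDF mirrors much of the pandas API); the column names and flagging rule are illustrative, not from the interview:

```python
# Hedged sketch: a fraud-style aggregation written against the pandas API.
# With RAPIDS, the same logic can run on GPUs by swapping the import for
# cudf (RAPIDS' pandas-like DataFrame library). Columns and the threshold
# here are toy assumptions standing in for the IRS-style workload.
import pandas as pd  # with RAPIDS installed: import cudf as pd

transactions = pd.DataFrame({
    "account": ["a1", "a1", "b2", "b2", "b2"],
    "amount":  [120.0, 5800.0, 40.0, 35.0, 7200.0],
})

# Flag accounts whose largest transaction is far above their median.
stats = transactions.groupby("account")["amount"].agg(["median", "max"])
flagged = stats[stats["max"] > 10 * stats["median"]]
print(list(flagged.index))  # → ['b2']
```

The appeal Manuvir describes is exactly this: the analyst's code stays the same, and the platform decides where it executes.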
And where we're going to see the most improvement will be in speed and accuracy for existing enterprise AI apps like fraud detection, recommendation engines, supply chain management, and drug discovery. And increasingly, the consumer led technologies will be bleeding into the enterprise in the form of autonomous factory operations. Examples of that would be robots, AR and VR in manufacturing driving better quality, power grid management, automated retail, IoT, you know, intelligent call centers. All of these will be powered by AI, but really the list of potential use cases now is virtually endless. >> I mean, Manuvir, this is like your wheelhouse. Maybe you could add something to that. >> Yeah, I agree with Rob. I mean, he listed some really good use cases. The way we see this at NVIDIA, this journey is in three phases or three steps, right? The first phase was for the early adopters, you know, the builders who assembled use cases, particular use cases like a chat bot, from the ground up with the hardware and the software. Almost like going to your local hardware store, buying piece parts, and constructing a table yourself. Right now, I think we are in the first phase of the democratization, for example, the work we do with Cloudera, which is for a broader base of customers, still building for a particular use case, but starting from a much higher baseline. So think about, for example, going to Ikea now and buying a table in a box, right? You still come home and assemble it, but all the parts are there, the instructions are there, there's a recipe you just follow and it's easy to do, right? So that's the phase we're in now. And then going forward, the opportunity we really look forward to for the democratization, you talked about applications like CRM, et cetera. I think the next wave of democratization is when customers just adopt and deploy the next version of an application they already have.
And what's happening is that under the covers, the application is infused by AI, and it's become more intelligent because of AI, and the customer just thinks they went to the store and bought a table and it showed up and somebody placed it in the right spot, right? And they didn't really have to learn how to do AI. So these are the phases, and I think we're very excited to be going there. >> You know, Rob, the great thing for your customers is they don't have to build out the AI. They can buy it. And just in thinking about this, it seems like there are a lot of really great, and even sometimes narrow, use cases. So I want to ask you, you know, staying with AI for a minute, one of the frustrations, and Mick and I talked about this, is the GIGO problem that we've all, you know, studied in college: garbage in, garbage out. The frustration that users have had is really getting fast access to quality data that they can use to drive business results. So do you see, and how do you see, AI maybe changing the game in that regard, Rob, over the next several years? >> So yeah, the combination of the massive amounts of data that have been gathered across the enterprise in the past 10 years with open APIs is dramatically lowering processing costs while delivering much greater speed and efficiency. And that's allowing us as an industry to democratize data access, while at the same time delivering federated governance and security models. And hybrid technologies are playing a key role in making this a reality, enabling data access to be, quote, hybridized, meaning accessed and treated in a substantially similar way, irrespective of the physical location of where that data actually resides. >> And that's great. That is really the value layer that you guys are building out on top of all this great infrastructure that the hyperscalers have given us.
You know, a hundred billion dollars a year that you can build value on top of for your customers. Last question, and maybe Rob, you could go first and then Manuvir, you could bring us home. Where do you guys want to see the relationship go between Cloudera and NVIDIA? In other words, how should we as outside observers be thinking about and measuring your progress specifically, and the industry's progress generally? >> Yes, I think we're very aligned on this. For Cloudera, it's all about helping companies move forward, leveraging every bit of their data in all the places that it may be hosted, and partnering with our customers and working closely with our technology ecosystem of partners means innovation in every industry. That's inspiring for us, and that's what keeps us moving forward. >> Yeah, I agree with Rob. For us at NVIDIA, this partnership started with data analytics. As you know, Spark is a very powerful technology for data analytics, and people who use Spark rely on Cloudera for that. And the first thing we did together was to really accelerate Spark in a seamless manner. But we're also accelerating machine learning, we're accelerating artificial intelligence together. And I think for NVIDIA it's about democratization. We've seen what machine learning and AI have done for the early adopters, helping them make their businesses, their products, their customer experience better. And we'd like every company to have the same opportunity.
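The "seamless" Spark acceleration Manuvir closes on is delivered by NVIDIA's RAPIDS Accelerator plugin for Apache Spark, which intercepts Spark SQL and DataFrame plans and runs supported operations on GPUs without code changes. A rough sketch of enabling it at submit time (the jar name, version placeholder, and GPU resource amounts below are illustrative; actual values depend on the cluster and plugin release):

```shell
# Hedged sketch: enabling the RAPIDS Accelerator for Apache Spark so an
# existing Spark job runs supported operations on GPUs unchanged.
# Jar path/version and resource counts are illustrative assumptions.
spark-submit \
  --jars rapids-4-spark_2.12-<version>.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  your_existing_job.py
```

The point, matching the interview, is that the application script itself is untouched; acceleration is a deployment decision.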

Published Date : Aug 2 2021


Ravi Pendekanti, Dell EMC | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're at Super Computing 2017, Denver, Colorado. 12,000 people talking about big iron, big questions, big challenges. It's really an interesting take on computing, really out on the edge. The keynote was, literally, light years out in space, talking about predicting the future with quarks and all kinds of things, a little over my head for sure. But we're excited to kind of get back to the ground, and we have Ravi Pendekanti. He's the Senior Vice President of Product Management and Marketing, Server Platforms, Dell EMC. It's a mouthful. Ravi, great to see you. >> Great to see you too Jeff, and thanks for having me here. >> Absolutely, so we were talking before we turned the cameras on. One of your big themes, which I love, is kind of democratizing this whole concept of high performance computing, so it's not just the academics answering the really, really, really big questions. >> You're absolutely right. I mean think about it Jeff, 20 years ago, even 10 years ago, when people talked about high performance computing, it was what I call as being in the back alleys of research and development. There were a few research scientists working on it, but we're at a time in our journey towards helping humanity in a bigger way. HPC has found its way into almost every single mainstream industry you can think of. Whether it is fraud detection, you see MasterCard is using it for ensuring that they can see and detect any of the fraud that can be committed, earlier than the perpetrators come in and actually hack the system. Or if you get into life sciences, if you talk about genomics. I mean this is what might be good for our next set of generations, where they can probably go out and tweak some of the things in a genome sequence so that we don't have the same issues that we have had in the past. >> Right. Right?
So, likewise, you can pick any favorite industry. I mean we are coming up to the holiday seasons soon. I know a lot of our customers are looking at how do they come up with the right schema to ensure that they can stock the right product and ensure that it is available for everyone at the right time? 'Cause timing is important. I don't think any kid wants to go with no toy and have the product ship later. So bottom line is, yes, we are looking at ensuring the HPC reaches every single industry you can think of. So how do you guys parse HPC verses a really big virtualized cluster? I mean there's so many ways that compute and store has evolved, right? So now, with cloud and virtual cloud and private cloud and virtualization, you know, I can pull quite a bit of horsepower together to attack a problem. So how do you kind of cut the line between Navigate, yeah. big, big compute, verses true HPC? HPC. It's interesting you ask. I'm actually glad you asked because people think that it's just feeding CPU or additional CPU will do the trick, it doesn't. The simple fact is, if you look at the amount of data that is being created. I'll give you a simple example. I mean, we are talking to one of the airlines right now, and they're interested in capturing all the data that comes through their flights. And one of the things they're doing is capturing all the data from their engines. 'Cause end of the day, you want to make sure that your engines are pristine as they're flying. And every hour that an engine flies out, I mean as an airplane flies out, it creates about 20 terabytes of data. So, if you have a dual engine, which is what most flights are. In one hour they create about 40 terabytes of data. And there are supposedly about 38,000 flights taking off at any given time around the world. I mean, it's one huge data collection problem. Right? I mean, I'm told it's like a real Godzilla number, so I'll let you do the computation. 
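The "Godzilla number" Ravi leaves as an exercise can be sketched quickly, taking the transcript's figures at face value (20 TB per engine per flight hour, two engines per plane, roughly 38,000 flights aloft at once):

```python
# Back-of-envelope for the airline example in the interview.
TB_PER_ENGINE_PER_HOUR = 20      # "about 20 terabytes of data" per engine-hour
ENGINES_PER_PLANE = 2            # "if you have a dual engine"
FLIGHTS_AIRBORNE = 38_000        # "about 38,000 flights at any given time"

tb_per_plane_hour = TB_PER_ENGINE_PER_HOUR * ENGINES_PER_PLANE   # 40 TB, as stated
tb_per_hour_worldwide = tb_per_plane_hour * FLIGHTS_AIRBORNE     # 1,520,000 TB
exabytes_per_hour = tb_per_hour_worldwide / 1_000_000            # decimal TB -> EB

print(f"{tb_per_hour_worldwide:,} TB/hour ≈ {exabytes_per_hour:.2f} EB/hour")
# → 1,520,000 TB/hour ≈ 1.52 EB/hour
```

So the worldwide fleet, on these rough assumptions, would generate on the order of 1.5 exabytes of engine telemetry per hour, which is the scale driving the collection problem Ravi describes.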
My point is if you really look at the data, data has no value, right? What really is important is getting information out of it. The CPU on the other side has gone to a time and a phase where it is hitting what I call as the threshold of Moore's law. Moore's law was all about performance doubling every two years. But today, that performance is not sufficient. Which is where auxiliary technologies need to be brought in. This is where the GPUs, the FPGAs come in. Right, right. Right. So when you think about these, that's where the HPC world takes off. You're augmenting your CPUs and your processors with additional auxiliary technology, such as the GPUs and FPGAs, to ensure that you have more juice to go do this kind of analytics on the massive amounts of data that you and I and the rest of humanity are creating. It's funny that you talk about that. We were just at a Western Digital event a little while ago, talking about the next generation of drives, and it was the same thing, where now it's this energy assist method to change really the molecular way that it saves information to get more out of it. So that's kind of how you parse it. If you've got to juice the CPU, and kind of juice the traditional standard architecture, then you're moving into the realm of high performance computing. Absolutely. I mean this is why, Jeff, yesterday we launched a new PowerEdge C4140, right? The first of its kind in terms of the fact that it's got two Intel Xeon processors, but beyond that, it also can support four Nvidia GPUs. So now you're looking at a server that's got both the CPUs, to your earlier comment on processors, but is augmented by four of the GPUs, which gives immense capacity to do this kind of high performance computing. But as you said, it's not just compute, it's store, it's networking, it's services, and then hopefully you package something together in a solution so I don't have to build the whole thing from scratch. You guys are making moves, right?
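Ravi's point about augmenting CPUs with GPUs and FPGAs is, in effect, Amdahl's law: the overall speedup of a job is capped by whatever fraction of it the accelerator cannot touch. A small illustrative calculation (the 90% offload fraction and 30x kernel speedup are assumptions for the sketch, not figures from the interview):

```python
# Amdahl's law: overall speedup = 1 / ((1 - p) + p / s), where p is the
# fraction of the workload offloaded to the accelerator and s is the
# speedup of that offloaded portion.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative: if 90% of an analytics job runs 30x faster on GPUs, the
# job as a whole speeds up only ~7.7x -- the serial 10% dominates.
print(round(amdahl_speedup(0.90, 30.0), 1))  # → 7.7
```

This is why "just feeding CPU" doesn't do the trick, and also why the surrounding pieces (storage, networking, the serial parts of the pipeline) matter as much as the accelerator itself.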
Oh, this is a perfect lead in, perfect lead in. I know my colleague, Armagon, will be talking to you guys shortly. What his team does is take all the building blocks we provide, such as the servers, obviously look at the networking, the storage elements, and then put them together to create what are called solutions. So you've got solutions which enable our customers to go back in and easily deploy a machine-learning or a deep-learning solution, where now our customers don't have to do what I call as the heavy lift, in trying to make sure that they understand how the different pieces integrate together. So the goal behind what we are doing at Dell EMC is to remove the guess work out so that our customers and partners can go out and spend their time deploying the solution. Whether it is for machine learning, deep learning or pick your favorite industry, we can also verticalize it. So that's the beauty of what we are doing at Dell EMC. So the other thing we were talking about before we turned the cameras on is, I call them the itys from my old Intel days: reliability, sustainability, serviceability, and you had a different phrase for it. >> Ravi: Oh yes, I know you're talking about the RAS. The RAS, right. Which is the reliability, availability, and serviceability. >> Jeff: But you've got a new twist on it. Oh we do. Adding something very important, and we were just at a security show early this week, CyberConnect, and security now cuts through everything. Because it's no longer a walled garden, 'cause there are no walls. There are no walls. It's really got to be baked in every layer of the solution. Absolutely right. The reason is, if you really look at security, it's not about, you know, till a few years ago, people used to think it's all about protecting yourself from external forces, but today we know that 40% of the hacks happen because of the internal, you know, system processes that we don't have in place.
Or we could have a person with an intent to break in for whatever reason, so integrated security becomes part and parcel of what we do. This is where, as part of our 14G family, one of the things we said is we need to have integrated security built in. And along with that, we want to have the scalability, because no two workloads are the same, and we all know that the amount of data being created today is twice what it was last year for each of us, forget about everything else we are collecting. So when you think about it, we need integrated security, we need to have the scalability feature set, and also we want to make sure there is automation built in. These three main tenets that we talked about feed into what we call internally the mnemonic PARIS. And that's, I think, Jeff, to our earlier conversation, what PARIS is all about. P is for best price performance. Anybody can choose to get the right performance or the best performance, but you don't want to shell out a ton of dollars. Likewise, you don't want to pay minimal dollars and try and get the best performance; that's not going to happen. I think there's a healthy balance between price and performance, that's important. Availability is important. Interoperability, as much as everybody thinks that they can act on their own, it's nearly impossible, or it's impossible, that you can do it on your own. >> Jeff: These are big customers, they've got a lot of systems. You are. You need to have an ecosystem of partners and technologies that come together, and then, end of the day, you have to go out and have availability and serviceability, or security, to your point, security is important. So PARIS is about price performance, availability, reliability, interoperability, and security. I like it. That's the way we design it. It's much sexier than that. We drop in, like, an Eiffel Tower picture right now. There you go, you should.
>> So Ravi, hard to believe we're at the end of 2017. If we get together a year from now at Super Computing 2018, what are some of your goals, what are some of your objectives for 2018? What are we going to be talking about a year from today? >> Oh, well, looking into a crystal ball, as much as I can look into that, I think that-- >> Jeff: As much as you can disclose. >> And as much as we can disclose, a few things I think are going to happen. >> Jeff: Okay. >> Number one, going back to where we started this conversation: HPC has become mainstream, we talked about it, but the adoption of high performance computing, in my personal belief, is still not at the level that it needs to be. So, if you go out the next 12 to 18 months, let's say, I do think the adoption rates will be much higher than where we are. And we talk about security now, because it's a very topical subject, but as much as we are trying to emphasize to our partners and customers that you've got to think about security from ground zero, we still see a number of customers who are not ready. You know, some of the analyses show nearly 40% of CIOs are not ready, and don't truly understand, I should say, what it takes to have a secure system and a secure infrastructure. It's my humble belief that people will pay attention to it and move the needle on it. And we talked about, you know, four GPUs in our C4140; I do anticipate that there will be a lot more auxiliary technology packed into it. Sure, sure. So that's essentially what I can say without spilling the beans too much. >> Okay, all right, super. Ravi, thanks for taking a couple of minutes out of your day, appreciate it. >> Thank you. >> All right, he's Ravi, I'm Jeff Frick, you're watching theCUBE from Super Computing 2017 in Denver, Colorado. Thanks for watching. (techno music)

Published Date : Nov 16 2017
