Garima Kapoor, MinIO | KubeCon + CloudNativeCon NA 2022
>>How y'all doing? My name's Savannah Peterson, coming to you from Detroit, Michigan, where theCUBE is excited to be at KubeCon. Our guest this afternoon is a wonderfully brilliant woman who's been leading in the space for over eight years. Please welcome Garima Kapoor. Garima, thanks for being with us.
>>Well, thank you for having me. It's a pleasure.
>>Good to see you. So, update us — what's going on here? We saw you at VMware Explore. Welcome back to theCUBE. What's going on for you guys here? What's the message? What's the story?
>>KubeCon, like I always say, is our event, our audience. So, you know, MinIO — I don't know if you've been keeping track — did reach a billion Docker downloads recently.
>>Congratulations. This is your tribe right here.
>>It is. It is.
>>Your tribe is cloud native infrastructure. Come on.
>>You know, this audience understands us, and we understand them. You were asking when we started the company: we started in 2014, and Kubernetes was born in 2015, in all sorts of ways. So we quite literally grew up together along the Kubernetes journey. All the decisions we took were about making sure we addressed the Kubernetes and cloud native audiences as first-class citizens when it comes to storage. I think that has been very instrumental in leading us to the point where we have reached a billion Docker downloads and we are the most loved object storage out there.
>>So do you like your younger brother Kubernetes, or not? Is it a family that gets along?
>>It does get along. In the Kubernetes space, what we are seeing from the customer standpoint as well is that they're warming up to Kubernetes, and they are using Kubernetes as a framework to deploy anything at scale.
And especially when you're offering storage as a service — whether to your internal audience or to an external one — Kubernetes becomes extremely instrumental, because it makes multi-tenancy extremely easy. It makes access control extremely easy for different user sets, and so on. So Kubernetes is definitely the way to go. I think enterprises just need a bit more skill set when it comes to Kubernetes overall — there are still areas they need to invest in — but this is the right direction, the right way. If you want multi-tenant, you need Kubernetes for compute and you need Kubernetes for storage.
>>You guys hit an interesting spot here with Kubernetes. You have a product that targets builders, but it's also a service that's consumed.
>>Yes. Yes.
>>How do you see those two lanes shaping out as the world starts to grow and the ecosystem's growing? You've got products for builders and products for developers consuming services. How do you see that shaking out — are there intersections there? You seem to be hitting that.
>>There is definitely an intersection, and I think it's getting merged, because a lot of these users are the ones who dictate what kind of stack they want as part of their application ecosystem overall. That is where, for example in the big data workloads, they tell their IT or storage department: this is the S3-compatible storage we want our applications to run on. So the bridge is definitely narrowing between the builders and the service consumers. And at the end of the day, people need to get their job done. From an application user's perspective, they want to just get in and get out.
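The multi-tenant access control Kapoor describes — different policies for different user sets at the bucket level — can be pictured with a toy policy check. This is an illustrative sketch only: MinIO's actual access control uses IAM-style JSON policy documents, and the tenant, bucket, and action names below are made up.

```python
# Toy per-tenant bucket policy table. Real systems (MinIO, S3) express
# this as IAM-style policy documents; this only shows the shape of the
# check. All names here are invented for illustration.
POLICIES = {
    ("team-a", "analytics"): {"s3:GetObject", "s3:PutObject"},
    ("team-b", "analytics"): {"s3:GetObject"},  # read-only tenant
}

def is_allowed(tenant: str, bucket: str, action: str) -> bool:
    """Return True if the tenant's policy on the bucket grants the action."""
    return action in POLICIES.get((tenant, bucket), set())

# team-b can read but not write the shared bucket:
assert is_allowed("team-b", "analytics", "s3:GetObject")
assert not is_allowed("team-b", "analytics", "s3:PutObject")
```

The point of the granularity she mentions is that each (tenant, bucket) pair carries its own grant set, so one shared deployment can serve many isolated user groups.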
They don't want to deal with the underlying complexity when it comes to storage or any of the framework. So what we enable is for the builders to have an extremely simple, high-performance software service they can offer to their customers — one that is S3-compatible. Now they can take their applications wherever they need to go: edge, on-prem, any of the public clouds. Wherever you need to be, go be with it.
>>With MinIO, I want to get your thoughts on a really big trend happening now, right in your area of expertise. People are realizing, hey, I don't necessarily need AWS S3 for storage — I can do my own storage, or build my own. So there's a cost/value question for commodity storage. When does a company decide what to do there? Do they do their own? You see Cloudflare, you see Wasabi, other companies emerging. You guys are here. There are common services, and then there's a differentiator in the cloud. What's this all about?
>>Yeah, there are a couple of things going on in this space. Firstly, I think the cloud model is the way to go — and what we mean by cloud is not the public cloud, it's the cloud operating model overall. You need to build applications the correct way so that they can consume cloud native infrastructure correctly. Secondly, I think cloud is great for your burst workloads. It's all about productivity, about getting your applications to market as fast as you can. And that is where MinIO comes into play: you can develop your applications natively on something like MinIO, and when you take them to production, it's very easy no matter where you go.
And thirdly, when it comes to cost, what we offer customers is predictability — no surprises in the bills — which is extremely important to the CFO of a company, because everyone knows the cloud is not the cheapest place to run your sustainable workloads. And there is an unpredictability element involved: people leave their buckets on, people leave their compute nodes on — it happens all the time. If you take that uncertainty out and have more predictability around it, that is where the true value lies.
>>You're really hitting on a theme we've been hearing a lot on theCUBE today, which is standardization and predictability. Everyone always wants to move fast, but I think we're actually stepping away from that Mark Zuckerberg "move fast and break things" era — let's move fast, but know how much it's going to cost, and also decrease the complexity.
>>And don't break things.
>>Yeah, exactly — and minimize the collateral damage. I love that you're enabling folks like that. I'm curious, because I see from your background that you have a PhD in philosophy, and we don't always see philosophy and DevOps and Kubernetes in the same conversation. How does this translate into your leadership within your team and MinIO's culture?
>>So it's a PhD in financial management and financial economics — that's where my specialization lies. And after that I came to the Bay Area. Once you're in the Bay Area, you cannot escape technology.
>>True.
>>It's just the way things are. You cannot escape startups, you cannot escape technology overall. So that's how I got introduced to it, and it has been a great journey so far. From the culture standpoint, I always say: if I can learn technology, anyone can learn technology.
So what we look for is the right attitude — the right passion to learn is what's most important in this world if you want to succeed. And that's what I tell everyone who joins MinIO: two months, three months, and you'll be up and going. I'm not too worried about it.
>>So pedigree doesn't always play into it, because with changing technology you can level up, get in there, and be contributing.
>>I think one of the reasons we have been as successful as we have with storage is that we've not hired storage experts, because they come with their own legacy and mindset of how to build things. We always came from the point of view that we are not a storage company — we are a data company, and we want to be close to the data. When you come with that mindset, you build a product directly addressing the data, not the traditional appliance world and so on. So those things have been very instrumental in getting the right people on board and making sure they're aligned with how we do things and with the DNA of the company.
>>That's passion, and it's actually counterintuitive, but it makes sense. In new markets it doesn't always take the boilerplate skill set or person. We're doing journalism, but we don't hire journalists.
>>You've got to be adventurous. Curious.
>>Exactly. For you to disrupt any space, you cannot approach it the way the incumbents approach the problem. You need to completely turn the tables upside down, as they say. You need to disrupt it and have the surprise element. I think that is what always makes a technology special: you cannot follow the path others have followed. You need to come from a different space, a different mindset altogether.
So that is where, like you said, it's important that the people are adventurous.
>>That is for sure. Talk to us about the company. Are you growing, scaling? How do people find out more?
>>Oh yeah, for sure. People can find out more by visiting our website, min.io. We are growing — at the end of last year we closed our Series B round at a unicorn valuation, and so on.
>>She says "unicorn valuation" so casually — I just want to point that out. Like a true strong female leader. I love that.
>>Thank you. So in terms of growth and scalability, we are growing the team and onboarding more commercial customers to the platform. It's growth all across — growth from the community standpoint, growth from the commercial-numbers standpoint.
>>Congratulations. That's very exciting. Garima, thank you so much for being with us.
>>Thank you for having me.
>>Thanks for hanging out, and to all of you, thank you so much for tuning in to theCUBE, especially for this exciting edition for all of us here in Detroit, Michigan, coming to you from KubeCon. See you back here in a little bit.
Garima Kapoor, MinIO | VMware Explore 2022
>>Hey, welcome back, everyone, to theCUBE's coverage of VMware Explore 22 — formerly VMworld. I'm John Furrier with Dave Vellante, in our 12th year extracting the signal from the noise. A lot of great guests. It's very vibrant right here: the floor's great, the expo hall's booming, the keynotes went great. Our next guest here on day one is Garima Kapoor, co-founder and COO of MinIO. Welcome to theCUBE. Thanks for joining us.
>>Thank you for having me.
>>You're also an angel investor in a variety of companies, a CUBE alumni, and you've been in the valley for a long time. Thanks for coming on and sharing what's going on. So, first of all, VMware is obviously still on the wave — they've always been relevant and they've always been part of it. But that's changing: a lot's going on, and security and data are big conversations. And now with their multi-cloud — we call it supercloud, but their multi-cloud — it's about hyperscaler participation, cloud universal. It's clear that VMware has to be successful in every cloud, and that's really important. And storage is part of it — you guys do that. So talk about how you guys relate with MinIO: the vision, and how that connects with what's happening here.
>>Yeah. Like you already said, most enterprises are becoming data enterprises, and storage is the foundation layer of that. You need a system that is simple, scalable, and performant at scale, and that's where MinIO fits into the picture. We are software-defined and open source. VMware has traditionally been focused on enterprise IT, but that world is fast changing — they are making a move toward a developer-first approach — and MinIO, because it's open source, is simple enough for getting started deploying object storage and cloud native applications on top. So that's where we come in. We have around 1.3 million Docker downloads a day, so we own the developer market overall.
And that is where I see the partnership with VMware: as they come into multi-cloud on their own, MinIO is a foundational layer.
>>So just to elaborate on it: whenever you talk about multi-cloud, there are two pieces to it. One is the compute side and one is the storage side. On compute, Kubernetes takes care of it — once you containerize an application, you can deploy it to any cloud. But data has gravity, and all the clouds you see — AWS, Azure, Google Cloud — are inherently incompatible with each other. So you need a consistent storage layer with industry-standard APIs that you can deploy alongside your application without a single line of code change. That's what we do.
>>Oh, so you've got a great value proposition — love the story. So just to connect on something: we heard the keynote today. They've got to win the developers. They didn't say that, but they said they have the ops locked down; DevOps is now the new developer. We've been covering a lot of KubeCon, as you know, and with shifting left, everyone's in the CI/CD pipeline. So developers are driving all the action, and it has to be self-service and high velocity — it can't be slow, it's got to be fast. So it sounds like you're winning that piece.
>>Yes. And I think, more than that, what is most important is that it needs to be simple. It needs to get your job done in a very simple and efficient way, and I think that is very important to developers. They don't like complex appliances or complex pieces of software. They just want to get their job done and move on to the next thing, to build their application and deploy it successfully. So whatever you do, it needs to be very simple — and of course it needs to be feature-rich and performant, but that comes with the flow. Simplicity is what wins developers' hearts and minds overall.
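Her point about "without a single line of code change" is worth making concrete: with an S3-compatible layer, the application code stays identical across targets and only the endpoint configuration changes. A minimal sketch — the endpoint URLs and keys below are made-up placeholders, and a real client (boto3, one of the MinIO SDKs) would consume this same kind of configuration:

```python
# Same application, different S3-compatible targets: only the endpoint
# differs. Endpoint URLs below are illustrative placeholders, not real
# deployments.
ENDPOINTS = {
    "aws":          "https://s3.us-east-1.amazonaws.com",
    "minio-edge":   "https://minio.edge.example.com:9000",
    "minio-onprem": "https://minio.dc.example.com:9000",
}

def client_config(target: str, access_key: str, secret_key: str) -> dict:
    """Connection settings an S3 client would take; app logic is unchanged."""
    return {
        "endpoint_url": ENDPOINTS[target],
        "access_key": access_key,
        "secret_key": secret_key,
    }

aws_cfg = client_config("aws", "AK", "SK")
edge_cfg = client_config("minio-edge", "AK", "SK")
# Everything except the endpoint is identical across targets:
assert {k: v for k, v in aws_cfg.items() if k != "endpoint_url"} == \
       {k: v for k, v in edge_cfg.items() if k != "endpoint_url"}
```

The design choice this illustrates is exactly the portability argument from the conversation: the S3 API acts as the stable contract, so "lift and shift" reduces to swapping one configuration value.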
>>Object storage has always been simple — GET, PUT — a pretty simple paradigm. But it was sort of the backwater before Amazon launched its cloud. How have you seen object storage evolve? You mentioned performance, so I presume you're not just for cheap-and-deep — you're for cheap and performance. Could you describe that a bit?
>>For sure. Like you mentioned, when AWS was launched, S3 was the foundation layer — they launched S3 first, and then everything else came around it. So object storage is the foundation of any cloud you go with. And over time — when we started the company at the end of 2014, beginning of 2015, it was all about cheap-and-deep storage, one basket you just put data into. But over the years, the scale of data has increased quite a bit, and new applications have emerged that require high performance. That is where we partnered very closely with Intel early on — and I have to give it to them, Intel was the one who convinced us to do high performance, to optimize our software with the AVX-512 instruction set and so on. So we partnered very closely with them, and we were the first to come up with high-performance object storage, in collaboration with Intel. That's something we take a lot of pride in: being the leader in bringing high-performance object storage to the market, especially for big data and AI/ML workloads, which are all object-first. Even new-age applications like Snowflake and Databricks are not built on SAN or file systems — they're all built on object storage.
>>Performance — and I think the Databricks and Snowflake examples are good. And you mentioned 2014, when you started.
At that time, big data was Hadoop and, you know, data lakes — data swamps. But the ones that were successful were the ones who optimized and had the right bets, like you guys. Now we're in an era of "okay, I've got to deploy this." You've got great downloads and uptake from developers; now we see ops struggling to keep up with the velocity of the development cycle. And with DevOps driving cloud native, security and DataOps become important — there's a lot going on with storage there. How do you see that emerging? It's become a lot of the conversation in the architecture of the ops teams: "I want to be supportive in enabling dev." Do you guys target that world too?
>>Yeah, we do. The good thing about object storage is that, if you look at the architecture, it's very granular in terms of the controls it can give the end user. You can really customize which objects are accessible to whom, what policies to implement at the bucket level, what access controls and provisions you need for different user sets, and so on. And especially with ransomware attacks, you can enable immutability and so forth. That's an important part of it — I think ransomware threats have increased quite a bit, especially with the macro situation, with the war and so on. So we see that come up a lot, and that's where data immutability, data governance, and compliance become extremely important for organizations. We are partnering very closely with a lot of big organizations just for this use case.
>>So how does it work if I want to build some kind of multi-cloud whatever-X? I can use S3 APIs, or Azure Blob — and they're all different.
But if I want to use MinIO, what's the experience like? Describe how I go about it.
>>If you've had any experience working with AWS, you don't need to change a single line of code with us. You can bring your applications directly onto MinIO, and it behaves and acts the same way, transparently, as what you would have experienced in AWS. Now you can lift and shift that application and deploy it wherever you need it to be — Azure Blob, Google Cloud, or even the edge. What we are seeing is that data is getting generated outside of the public cloud. The emerging trend is that a lot of data gets generated at the edge — autonomous cars, IoT, manufacturing units, and so on — and you cannot push all that data back into the central cloud; it's extremely expensive for bandwidth and latency reasons. So you need an environment at the edge that looks and feels exactly like what you've experienced in the central cloud. A lot of our use cases are getting deployed with MinIO on the edge itself, often on top of VMware because of the footprint VMware has within all these organizations. We see that emerging quite a bit. And then you can tier the data off to any cloud — MinIO's cloud, AWS, Azure, Google Cloud, and so on — so you can have a true multi-cloud environment.
>>So you would follow VMware to the edge and be the object store there — or, if it's not VMware, Kubernetes or whatever?
>>Exactly. It depends on the skill set the organization has in its setup. If they're DevOps-savvy, Kubernetes becomes a very natural choice; if they're traditional enterprise IT, VMware is an ideal choice.
>>So you're seeing a lot of edge action, you're saying.
>>We have seen it increasing, yes.
>>And customers
— they're persisting data at the edge?
>>Yes. Yes, they are.
>>Okay. It's not just ephemeral?
>>No, because the cost of pushing all that data over bandwidth to the central cloud, processing it, and then storing it is extremely expensive. So we see data getting persisted in the edge cloud as well, for processing — and only the data you need moves on, through whatever application systems you choose, whether it's Snowflake or Databricks. You choose what applications on the compute side you want to bring on top of the storage, and it can work seamlessly and transparently.
>>Garima, you were saying that multi-cloud is a game around Kubernetes — that Kubernetes is all about multi-cloud, that's the game.
>>Yes.
>>Can you explain what you mean by that? Why is multi-cloud a Kubernetes game?
>>Multi-cloud has two foundations to it. One is the compute side; the other is the storage side. On compute, Kubernetes makes it extremely simple to deploy any application that is containerized — once you containerize an application, it's no longer tied to the underlying infrastructure, and you can deploy it no matter where you go. And from the storage standpoint, the state of applications needs to be held somewhere. People say it's the cloud, but it's a computer somewhere, right?
>>Exactly — the container.
>>It needs to be stored somewhere. That's where storage systems like MinIO come into play: you can take the storage and deploy it wherever you go, so it gets tightly bound with the application itself. Just as Kubernetes is for compute, MinIO is for storage.
>>I saw Scott Johnston, the CEO of Docker, in Palo Alto last week — there's a spring in his step. Docker's doing pretty well as a result; they're starting to see certifications.
So people are really rallying around containers in a more open way. That's open source, but Kubernetes is where the action is. The containers are really there now, and Docker's got a great business with how they're handling it — I thought they did a great job. Docker is now the lingua franca, right? That's the standard.
>>It is. And I think where Kubernetes really makes things easy is when scale is involved. If the scale is small, it's okay — you can work around it. But Kubernetes makes it extremely simple, if you have the right Kubernetes skills — I need to put a disclaimer around that, because not a lot of people are Kubernetes experts, at least not yet. If you have the expertise, Kubernetes makes the task extremely simple, predictable, and automated at scale.
>>So take me through a use case, because I've talked to a lot of enterprises. There are multiple versions: "we're lifting and shifting to the cloud" — that's the get-your-feet-wet phase — then, "okay, now we're refactoring, really doing some native development," and then, "we don't have a staff for Kubernetes, we'll do a managed service." How do you see that evolution taking place? It's a critical adoption component as they figure out their Kubernetes relationship to compute and how they roll it out. How do you see that playing out as a big part of this growth for a customer?
>>We see a mix. We see organizations that were born in the cloud — they've just been in one cloud, like AWS. Now they are thinking about two things. With the economy in the state that it is, they're getting hurt on margin — some of the SaaS companies that were born in the cloud.
So they are now actively thinking about what moves they can make to bring the cost down. They are partnering with MinIO either to be in colocation at Equinix-like data centers, or to go to other clouds to optimize their compute costs, and so on. That's one thing we see increasingly among enterprises. The second thing we see — because cloud does go down; it's been evident over the last year or so, with instances where Amazon was down or Google Cloud was down — is that they want the data available across clouds in a consistent way. With MinIO's active-active replication and so on, you can make the data available across clouds. So even if one cloud is down, for DR purposes and so on, you can transparently move the applications to another cloud and make sure your business is not affected. Customers are partnering with us for business-continuity reasons as well. Like I said, it's a mix.
>>So Tanzu — the application development platform we heard about in this morning's keynote — is critical. You have to have that for cross-cloud services; if you don't have a consistent experience, forget it. It's table stakes. But there's a lot of chatter on Twitter, a lot of skepticism, that VMware can appeal to developers. Some folks chimed in saying, well, don't forget about the ops side of the equation as well — they need security and consistency. What are you seeing in the marketplace in terms of VMware, specifically their customers? How do you rate their chances of attracting the developer crowd — your peeps?
>>VMware has a very strong hold on enterprise IT — you have to give it to them.
I don't come across any organization that does not have VMware. With 500,000 customers, they have done something really right for themselves. And if you have such a strong hold on the customers, it's not that hard to make the transition over to the developer mindset as well. That is where, with partners like us, VMware can make that jump happen. We partnered with them very closely on the data persistence layer: they wanted to bring Kubernetes — VMware Tanzu — natively to the vSAN interface itself. We were their design partner, in 2020 or so, and their launch partner for that platform service. So now, through vCenter itself, you can provision object storage as a service for developers. I think they are working to bridge the gap, and they have the right mindset. It's all about execution.
>>They've got to get it adjusted.
>>And it's the execution and timing, exactly. And if they overshoot and the market shifts — this comes up a lot in our conversations, and I want to get your reaction, because I think it's a really great point. You guys are a nice foundational element for VMware that plugs into them and makes everything kind of float. Now, we were comparing OpenStack back in the day — it had so much promise, if you remember, and storage was a big part of that conversation — but the one thing a lot of people didn't factor into those industry discussions was that Amazon was just ramping. So, assuming the hyperscalers aren't going to stop innovating, how does multi-cloud fit with the constant struggles? AWS isn't rah-rah about multi-cloud, because they're there for their own cloud, but customers are using Azure for —
Say, office productivity teams or whatever — and then they have apps over here, and apps on private cloud. So hybrid's there; we get hybrid. The clouds aren't changing. How does that change the dynamics in the market? Because it's a moving train, some say.
>>You know, I would not characterize it like that, because AWS's strength is that it is AWS — and also that it is not outside of AWS. So it comes with its strengths and weaknesses, and the same goes for Azure, and the same goes for Google Cloud. Where VMware's strength lies is in the enterprise customers it has. And I think if they can bridge the gap between the developers, the enterprise customers, and also the cloud, they have a really fair shot at making sure organizations and enterprises have the right experiences. Everyone needs to innovate — there is nothing you can achieve by just sitting back and relaxing. And the good part about VMware is the partner ecosystem it has developed over the years, and making sure its partners are successful along with it. I think that is going to be a key determining factor in how well and how fast they can execute, because nobody can do it alone in the enterprise world.
>>Garima, you're a great guest — thanks for coming on and sharing your perspective on theCUBE. And you've been at this from day one, 2014-15 — that's early, and you guys made some great moves. You're in a great position with VMware; I like how you're the connective tissue and bridge to developers without a lot of disruption — real enablement. I think the question is whether the VMware customers can get there. So congratulations.
>>No, thank you.
>>And we've got a couple of minutes left.
Take a minute to explain what's going on with the company that you co-founded, the team, what's going on. Any updates? Funding, very well, well funded. Yeah. How many people do you have? What's new? Are you gonna hire, where? Take a minute to give the plug, give the commercial real quick. >>For sure. So we started in 2014, '15, so it has been like seven, eight years now that we are at it. And I think we've been just very focused with the S3-compatible object storage, being the AWS S3 for the rest of the world, like we get characterized as. And over the years, we are now used in 60% of Fortune 500 companies in some shape or format. So in terms of the scale and growth, we couldn't be happier. We are about to touch a billion Docker downloads in September. So that's something that we are very excited about. And in terms of the funding, we closed our Series B sometime, I think, end of December last year, at a billion-dollar valuation, and we have great partners in Intel Capital and Dell Ventures and SoftBank. So we couldn't be in a happier >>Spot. You're a unicorn soon to be decacorn. Right. 
I mean, yeah. And like, you know, 10 years from now, Kubernetes might or might not be there as the foundation for, you know, compute, but storage is something that is always going to be there. People still need to persist the data. People still need a performant data store. People still need something that can scale to hundreds and hundreds of petabytes. So we are here. >>You can't bet against data. As Andy Grove said once, you know, let chaos reign, then rein in the chaos. There you go. Chaos in the cloud is gonna be simplified. Yeah. That's what innovation looks like. That's, >>That's what it is. >>Thanks for coming on theCUBE. Appreciate it. >>Thank you for having me. >>More coverage here. I'm John Furrier with Dave Vellante. Thanks for watching. More coverage. Three days, just getting started. We'll be right back.
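The object-storage model that runs through this conversation, flat buckets of keys with prefix-based listing, is the semantics S3 popularized and MinIO implements. As a rough sketch only (a toy in-memory stand-in, not MinIO's actual API or implementation), the interface reduces to a few operations:

```python
class ToyObjectStore:
    """Toy in-memory stand-in for an S3-style object store (illustrative only)."""

    def __init__(self):
        self._buckets = {}

    def make_bucket(self, bucket):
        # Buckets are just flat namespaces of keys.
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data: bytes):
        self._buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        # "Directories" are an illusion: listing is just a prefix filter.
        return sorted(k for k in self._buckets[bucket] if k.startswith(prefix))


store = ToyObjectStore()
store.make_bucket("models")
store.put_object("models", "resnet/weights.bin", b"\x00\x01")
store.put_object("models", "resnet/config.json", b"{}")
print(store.list_objects("models", prefix="resnet/"))
# → ['resnet/config.json', 'resnet/weights.bin']
```

Because there is no hierarchy to maintain, only key/value pairs and prefix scans, this model scales out far more easily than a POSIX filesystem, which is the property the speakers are pointing at.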
Garima Kochhar, Dell EMC | Dell EMC: Get Ready For AI 2018
(upbeat electronic music) >> Hey, welcome back everybody, Jeff Frick here at theCUBE. We are on the ground in Austin, Texas on a really special field trip that we're excited to be on: the Dell EMC high performance computing and machine learning innovation labs. They've got every type of configuration of hardware and software, so this is where they put it together. They test all the configs, pre-build solutions, and really design custom solutions for the customer. We're excited to have with us our next guest. She's Garima Kochhar, she's on the technical staff and a senior principal engineer at Dell EMC. Welcome. >> Thank you. >> What a cool place that you work here. >> That's right, that's right. >> Flashing lights, tons of drives, every kind of potential hardware configuration that you guys could ever put together. >> Exactly, I've been on this team 14 years and now you can tell why I'm still here, so much to do and so much to learn. >> So it's a big day. We're talking about really the AI kind of flavor of this lab, and the machine learning. From before, it's really always been a high performance computing lab. From your perspective, what's kind of changing in the landscape from high performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning and stuff we hear about in much more of a business context today? >> Right, so this lab's been around for awhile and we've been primarily focused on the high performance computing piece, and we've added in AI. High performance computing has applicability across a broad range of industries, so not just national labs and supercomputers but the commercial space as well, and in our lab we've done a lot of that work in the last several years. 
And then the deep learning algorithms, those have also been around for decades, but what we are finding right now is that the algorithms and the hardware and the technologies available have hit that perfect point, along with industries' interest and the amount of data we have, to make it more of what we would call mainstream, right, where more and more people are talking about it; every resume you see has AI, deep learning written on it. So it's not that AI is something net new and deep learning is something completely net new, it's that today's technologies allow us to use them. And because we have a lot of experience doing really elaborate solutions, doing complicated solutions, it was a natural fit to develop Dell EMC's AI, deep learning, machine learning solutions in this lab. >>Right, so it's a really interesting point that the tipping point of all these technologies seems to be happening at the same time. So we've got really fast CPUs, we've got GPUs now coming on scene, we were just at Google Cloud where they talked about Tensor Processing Units, so a lot of action there, a lot of excitement on the networking side, of course 5G's coming in the mobile space in a very short period of time, which is a total game changer, and you guys are really planted. And then of course I forget to mention the solid-state drives and getting away from spinning disks, which really opens up another kind of level of performance. But it's funny 'cause all systems, they ultimately find the bottleneck, so as you've kind of seen the evolution of all these different pieces, how have you seen that kind of bottleneck move? And how does the elimination of all these bottlenecks enable some of the solutions you guys are working on today? >>So you are absolutely right, so when we talk about systems or when we talk about solutions for high performance computing or AI, we're not talking about, oh, here's this GPU card or here's this Xeon CPU and this is the best thing that you need, right? 
Our job in this lab, our goal, is to bring new technologies into the lab, from inside of Dell as well as from all our partners, and evaluate those new technologies, see how they fit well together, because the final solution is comprised of multiple different pieces coming together and being interoperable. So putting these new technologies together, vetting them out to see which ones are ready for the market, which ones still need more work and improvement, putting these things together, designing systems, building them, doing evaluations. So doing benchmarks, running applications, doing a whole bunch of best practices and tuning, and doing this not just with all our partners but also with our customers. So you know this lab is set up for more access for our customers, putting all of this together to find the right solution, not just for toy use cases in a sense but for real-world use cases. >>Right. >>So you might find that you have use cases where you do not need the highest-speed interconnect, or you do not need the fastest CPU or the best GPU, and you need a balance, say, with memory bandwidth or solid-state drives or NVMe like you were saying. And our charter is to build the right solutions for specific workloads, and that's what we do here. 
How does my code work? And it allows us to learn from our customers, of course, and it allows them to get comfortable with Dell technologies, to work directly with the engineers and the experts, so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. >>Right, then as you said it's not only the kind of cutting-edge stuff, whether it's new CPUs or GPUs, but it's also all the kind of minutia that makes a rack a rack. >>Yes. >>It's all the connectors and all these things that can fail if they're not properly specced, or they become that unfortunate bottleneck. So you guys build the whole rack out, right, not just the fun shiny new toys. >>Yeah, you're right, so typically you know when something fails it fails spectacularly, right? So I'm sure you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator, or things like that, right? So there are lots of other teams that handle, of course Dell's really good at this, the logistics piece of it. But even within the lab, when you walk around the lab, you see our racks are set up with power meters, so we do power measurements. Whatever best practices and tuning we come up with, we feed that into our factories, so if you buy a solution say targeted for HPC, it would come with different BIOS tuning options than a regular Oracle database workload. We have this integration into our software deployment methods, so when you have racks and racks of equipment, or one rack of equipment, or maybe even three servers, and you're doing an installation, all the pieces are already baked in and everything is easy, seamless, easy to operate. 
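The factory integration Garima describes, a solution ordered for HPC shipping with different BIOS tuning than one ordered for an Oracle database, amounts to a profile lookup keyed by target workload. A minimal sketch, where the profile names and settings are hypothetical illustrations, not Dell EMC's actual factory defaults:

```python
# Hypothetical BIOS/OS tuning profiles keyed by target workload.
# Settings here are invented for illustration only.
TUNING_PROFILES = {
    "hpc":    {"hyperthreading": False, "power_policy": "performance", "numa": "enabled"},
    "oracle": {"hyperthreading": True,  "power_policy": "balanced",    "numa": "enabled"},
    "ai":     {"hyperthreading": True,  "power_policy": "performance", "numa": "enabled"},
}

def profile_for(workload: str) -> dict:
    """Return the factory tuning profile for a workload, or raise if unknown."""
    try:
        return TUNING_PROFILES[workload]
    except KeyError:
        raise ValueError(f"no tuning profile for workload {workload!r}")

print(profile_for("hpc")["power_policy"])  # → performance
```

The point of baking this into the factory and deployment tooling is that the lookup happens once, at build time, instead of every operator re-deriving the settings per rack.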
So our idea is, the more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT departments have to spend worrying about the equipment, and they can focus on their unique and specific use case. >>Right, and then the other little piece that you didn't mention, but it's a really important piece of the puzzle, is you guys have a services arm as well. So you can take the time, spec it out, and then you have services capability to help implement it, hook it up, connect it to their data sources, do the integrations, et cetera. Can't forget about that piece. >>You're absolutely right. We're an engineering lab, which is why it's really messy, right, if you look at the racks, if you look at the work we do. We're a working lab, we're an engineering lab, we're a product development lab. And of course we have a support arm, we have a services arm, and sometimes we're working with net new technologies, we conduct training in the lab for our services and support people, but we're an engineering organization. So when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. >>Right, I'm just curious, how long are some of those engagements when a team of customer engineers comes to work with you guys, say on a specific iteration of a solution that you build? Is that a week-long process, days long, how long do those kinds of engagements typically take? 
Sometimes it can be as long as three or four weeks, depending on the scope of the project and sometimes it's not just the customers logging in and doing stuff, it's us working with them or our team running their codes. So then there's more back and forth as well. >> So what's interesting, so the scope of today is all talking about the AI portion of the lab, but as you said you've had this for HPC and you've got a bunch of kind of core, I don't want to call it old school but old school infrastructure apps you've got Oracle in here and you've got SAP running so how do you, what's the benefit of having the experience in this sort of broader, set of applications as you can apply it to some of the newer more exciting things around AI machine learning and deep learning? >> Right so the fact that we are a shared lab, a bulk of this lab is high performance computing and AI but there's lot of other technologies and solutions we work on over here and there's other labs in the building that we have colleagues in as well. The first thing is, that the technology building blocks for several of of these solutions are similar, right? So when you're looking at storage alleys, when you're looking at Linux kernels, when you're looking at net pro cards, or solid-state drives or MBME, several of the building block technologies are similar and so when we find interoperability issues, which you would think there would never be any problems, you throw all these things together, they're always working. >> Of course. (laughs) >> Right, so when you sometimes rarely find the interoperability issue, that issue can affect multiple solutions and so we share those best practices because we engineers sit next to each other and we discuss things with each other, we're part of the larger organization. 
Similarly, when you find tuning options and nuances and power meters for performance or for energy efficiency, those also apply across different domains, so why you might think of Oracle as something that's been done for years with every iteration of technology there's new learning and that applies broadly across anybody using enterprise infrastructure. >> Right, so I'd just love to get your perspective as you come to work everyday, what excites you to take what was it's not the domain exclusively of big universities and feds but a lot it was there, to start to apply AI machine learning to such a broad swath of applications? What gets you excited, what are some of the things that you see like I'm so excited, now apply this horsepower to some of these problems out there. >> Right so that's a really good point right because most of time when you're trying to describe what you do it's hard to make everybody understand, well not what you're doing, but sometimes the deep technology's hard to explain what's the actual value of this and so a lot of work we're doing in terms of, excess skill it's to grow like, the human body of knowledge forward, to grow the signs happening in each country moving that forward and that's kind of at the higher reign when you talk about national labs and defense and everybody understands that needs to be done. But when you find that you're social media is doing some face recognition, everybody experiences that and everybody see that. And when you are trying to describe, we're all talking about driverless cars or we're all talking about, oh it took me so long because I had this insurance claim and then I had to get an appointment with the appraiser and they had to come in, I mean those are actual real world use cases where some of these technologies are going to apply. 
So even industries where you didn't think of them as being leading edge on the technical forefront in terms of IT, infrastructure, and digital transformation, in every one of these places you're going to have an impact from what you do. Whether it's drug discovery, or whether it's next-generation gene sequencing, or whether it's designing the next car. Pick your favorite car, or when you are flying in an aircraft, the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you've worked with. And so now it's everywhere, everywhere you go. We talked about 5G and IoT and edge computing, I mean we all work on this collectively, so it's our work. >>Right, okay, so last question before I let you go, just having the resources to bear, in terms of being in your position to do the work when you've got the massive resources now behind you at Dell, the merger with EMC, all the subset brands, there's so many brands: how does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? >>Yeah, exactly, so when you're building complex solutions there's no one company that makes every single piece of it, but the tighter that things work together, the better that they work together, and that's directly through all the technologies that we have in the Dell Technologies umbrella and with Dell EMC, and that's because of our super close relationships with our partners. That allows us to build these solutions that are painless for our customers and our users, so that's the advantage we bring, this lab and our company. >>Alright, well, thank you for taking a few minutes, your passion shines through. >>Thank you. >>Alright, she's Garima, I'm Jeff, we're at the Dell EMC high performance computing and artificial intelligence innovation labs, thanks for watching. (upbeat music)
Dell EMC: Get Ready For AI
(bright orchestra music) >> Hi, I'm Peter Burris. Welcome to a special digital community event brought to you by Wikibon and theCUBE, sponsored by Dell EMC. Today we're gonna spend quite some time talking about some of the trends in the relationship between hardware and AI. Specifically, we're seeing a number of companies doing some masterful work incorporating new technologies to simplify the infrastructure required to take full advantage of AI options and possibilities. Now at the end of this conversation, this series of conversations, we're gonna run a CrowdChat, which will be your opportunity to engage your peers and engage thought leaders from Dell EMC and from Wikibon SiliconANGLE, and have a broader conversation about what it means to be better at doing AI: more successful, improving time to value, et cetera. So wait 'til the very end for that. Alright, let's get it kicked off. Tom Burns is my first guest. And he is the Senior Vice President and General Manager of Networking Solutions at Dell EMC. Tom, it's great to have you back again. Welcome back to theCUBE. >> Thank you very much. It's great to be here. >> So Tom, this is gonna be a very, very exciting conversation we're gonna have. And it's gonna be about AI. So when you go out and talk to customers specifically, what are you hearing as they describe their needs, their wants, their aspirations as they pertain to AI? >> Yeah, Pete, we've always been looking at this as this whole digital transformation. Some studies say that about 70% of enterprises today are looking at how to take advantage of the digital transformation that's occurring. In fact, you're probably familiar with the Dell 2030 Survey, where we went out and talked to about 400 different companies of very different sizes. And they're looking at all these connected devices and edge computing and all the various changes that are happening from a technology standpoint, and certainly AI is one of the hottest areas. 
There's a report I think that was co-sponsored by ServiceNow. Over 62% of the CIOs in the Fortune 500 are looking at AI as far as managing their business in the future. And it's really about user outcomes. It's about how they improve their businesses, their operations, their processes, their decision-making, using the capability of compute coming down from a cost perspective and the number of connected devices exploding, bringing more and more data to their companies that they can use, analyze, and put to use cases that really make a difference in their business. >>They make a difference in their business, but also often these use cases are a lot more complex. We have this little bromide that we use, that the first 50 years of computing were about known process, unknown technology. We're now entering into an era where we know a little bit more about the technology. It's gonna be cloud-like, but we don't know what the processes are, because we're engaging directly with customers or partners in much more complex domains. That suggests a lot of things. How are customers dealing with that new level of complexity, and where are they looking to simplify? >>You actually nailed it on the head. What's happening in our customers' environment is they're hiring these data scientists to really look at this data. And instead of analyzing the data that's being collected and connected, they're spending more time worried about the infrastructure, building the components, and looking at allocations of capacity in order to make these data scientists productive. And really, what we're trying to do is help them get through that particular hurdle. So you have the data scientists that are frustrated, because they're waiting for the IT Department to help them set up and scale the capacity that they need and the infrastructure that they need in order to do their job. 
And then you've got the IT Departments that are very frustrated, because they don't know how to manage all this infrastructure. So the question around do I go to the cloud? Do I remain on-prem? All of these are things that our customers are continuing to be challenged with. >>Now, the ideal would be that you can have a cloud experience but have the data reside where it most naturally resides, given physics, given the cost, given bandwidth limitations, given regulatory regimes, et cetera. So how are you at Dell EMC helping to provide that sense of an experience based on what the workload is and where the data resides, as opposed to some other set of infrastructure choices? >>Well, that's the exciting part: we're getting ready to announce a new solution called the Ready Solutions for AI. And what we've been doing is working with our customers over the last several years looking at these challenges around infrastructure, the data analytics, the connected devices, but giving them an experience that's real-time. Not letting them worry about how am I gonna set this up, or management, and so forth. So we're introducing the Ready Solutions for AI, which really focuses on three things. One is simplify the AI process. The second thing is to ensure that we give them deep and real-time analytics. And lastly, provide them the level of expertise that they need in a partner in order to make those tools useful and that information useful to their business. >>Now we want to not only provide AI to the business, but we also wanna start utilizing some of these advanced technologies directly in the infrastructure elements themselves to make it more simple. Is that a big feature of what the Ready Solutions for AI is? >>Absolutely. As I said, one of the key value propositions is around making AI simple. We are experts at building infrastructure. We have IP around compute, storage, networking, InfiniBand. 
The things that are capable of putting this infrastructure together. So we have tested that based upon customers' input, using traditional data analytics libraries and tool sets that the data scientists are gonna use, already pre-tested and certified. And then we're bringing this to them in a way which allows them, through a service provisioning portal, to basically set up and get to work much faster. With the previous tools that were available out there, some from our competition, there were 15, 20, 25 different steps just to log on, just to get enough automation or enough capability in order to get the information that they need, the infrastructure allocated for this big data analytics. Through this service portal we've actually gotten it down to around five clicks, with a very user-friendly GUI, no CLI required. And basically, again, interacting with the tools that they're used to immediately, right out of the gate, like in stage three. And then getting them to work in stage four and stage five, so that they're not worried about the infrastructure, not worried about capacity, or is it gonna work. They basically are one, two, three, four clicks away, and they're up and working on the analytics that everyone wants them to work on. And heaven knows, these guys are not cheap. >>So you're talking about the data scientists. So presumably when you're saying they're not worried about all those things, they're also not worried about when the IT Department can get around to doing it. So this gives them the opportunity to self-provision. Have I got that right? >>That's correct. They don't need IT to come in and set up the network, to do the CLI for the provisioning, to make sure that there are enough VMs or workloads that are properly scheduled in order to give them the capacity that they need. They basically are set with a preset platform. Again, let's think about what Dell EMC is really working towards, and that's becoming the infrastructure provider. 
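The portal workflow Tom describes, down from 15 to 25 steps to about five clicks, implies a thin validation layer in front of the cluster scheduler: the data scientist picks a framework, sizes the capacity, and submits, and the portal checks the request and queues the allocation. A hypothetical sketch of that layer; the field names, framework list, and limits below are invented for illustration and are not the actual Ready Solutions portal:

```python
# Hypothetical self-service provisioning check (illustrative, not a real API).
SUPPORTED_FRAMEWORKS = {"tensorflow", "pytorch", "mxnet"}

def validate_request(user: str, framework: str, gpus: int, storage_tb: int) -> dict:
    """Validate a self-service provisioning request and return an allocation plan."""
    if framework not in SUPPORTED_FRAMEWORKS:
        raise ValueError(f"unsupported framework: {framework}")
    if not 1 <= gpus <= 16:
        raise ValueError("gpus must be between 1 and 16")
    if not 1 <= storage_tb <= 100:
        raise ValueError("storage_tb must be between 1 and 100")
    # A real portal would hand this plan to the scheduler; here we just return it.
    return {"user": user, "framework": framework, "gpus": gpus,
            "storage_tb": storage_tb, "status": "queued"}

plan = validate_request("alice", "tensorflow", gpus=4, storage_tb=10)
print(plan["status"])  # → queued
```

The value of the pattern is that bad requests fail fast at the portal, before any infrastructure is touched, which is what collapses those 25 manual steps into a few clicks.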
We believe that the silos of servers, storage, and networking are being eliminated, that companies want a platform on which they can enable those capabilities. So you're absolutely right. The part about simplifying the AI process is really giving the data scientists the tools they need to provision the infrastructure they need very quickly. >> And so that means that the IT group can actually start acting more like a DevOps organization, as opposed to a specialist in one or another technology. >> Correct, but we've also given them the capability by including the usual automation and configuration tools that they're used to, coming from some of our software partners, such as Cloudera. So in other words, you still want the IT Department involved, making sure that the infrastructure is meeting the requirements of the users. They're giving them what they want, but we're simplifying the tools and processes around the IT standpoint as well. >> Now, we've done a lot of research into what's happening in big data, and that's likely to happen in the AI world as well. And a lot of the problems that companies had with big data was that they conflated, or confused, the objectives, the outcome of a big data project, with just getting the infrastructure to work. And they often walked away, because they failed to get the infrastructure to work. So it sounds as though what you're doing is taking the infrastructure out of the equation while at the same time going back to the customer and saying, "Wherever you want this job to run or this workload to run, you're gonna get the same experience regardless." >> Correct, but we're gonna give them an improved experience as well. Because of the products that we've put together in this particular solution, combined with our compute, our scale-out NAS solution from a storage perspective, and our partnership with Mellanox for InfiniBand or Ethernet switch capability.
We're gonna give them deeper insights and faster insights. The performance and scalability of this particular platform is tremendous. We believe, in certain benchmark studies based upon the ResNet-50 benchmark, we've performed anywhere between two and a half and almost three times faster than the competition. In addition, from a storage standpoint, across all of these workloads and all of their various characteristics, you need a ton of IOPS. >> Yeah. >> And there's no one in the industry that has the IOPS performance that we have with our All-Flash Isilon product. The capabilities that we have there, we believe, are somewhere around nine times the competition. Again, scale-out performance while simplifying the overall architecture. >> Tom Burns, Senior Vice President of Networking and Solutions at Dell EMC. Thanks for being on theCUBE. >> Thank you very much. >> So there's some great points there about this new class of technology that dramatically simplifies how hardware can be deployed to improve the overall productivity and performance of AI solutions. But let's take a look at a product demo. >> Every week, more customers are telling us they know AI is possible for them, but they don't know where to start. Much of the recent progress in AI has been fueled by open source software. So it's tempting to think that do-it-yourself is the right way to go. Get some how-to references from the web and start building out your own distributed deep-learning platform. But it takes a lot of time and effort to create an enterprise-class AI platform with automation for deployment, management, and monitoring. There is no easy solution for that. Until now. Instead of putting the burden of do-it-yourself on your already limited staff, consider Dell EMC Ready Solutions for AI. Ready Solutions are complete software and hardware stacks, pre-tested and validated with the most popular open source AI frameworks and libraries.
Our professional services, with proven AI expertise, will have the solution up and running in days and ready for data scientists to start working in weeks. Data scientists will find the Dell EMC data science provisioning portal a welcome change for managing their own hardware and software environments. The portal lets data scientists acquire hardware resources from the cluster and customize their software environment with packages and libraries tested for compatibility with all dependencies. Data scientists choose between JupyterHub notebooks for interactive work, as well as terminal sessions for large-scale neural networks. These neural networks run across a high-performance cluster of PowerEdge servers with scalable Intel processors and scale-out Isilon storage that delivers up to 18 times the throughput of its closest all-flash competitor. IT pros will experience that AI is simplified as Bright Cluster Manager monitors your cluster for configuration drift, down to the server BIOS, using exclusive integration with Dell EMC's OpenManage APIs for PowerEdge. This solution provides comprehensive metrics along with automatic health checks that keep an eye on the cluster and will alert you when there's trouble. Ready Solutions for AI are the only platforms that keep both data center professionals and data scientists productive and getting along. IT operations are simplified, and that produces a more consistent experience for everyone. Data scientists get a customizable, high-performance, deep-learning service experience that can eliminate monthly charges spent on public cloud while keeping your data under your control. (upbeat guitar music) >> It's always great to see the product videos, but Tom Burns mentioned something earlier. He talked about the expansive expertise that Dell EMC has in bringing together advanced hardware and advanced software into simpler solutions that can liberate business value for customers, especially around AI.
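The configuration-drift monitoring described in the demo can be sketched in outline. The following is a hypothetical illustration, not Bright Cluster Manager's or OpenManage's actual API: it simply diffs each server's reported BIOS settings against a golden baseline and reports anything that has drifted.

```python
# Hypothetical sketch of configuration-drift detection, in the spirit of the
# demo's drift monitoring. The setting names, values, and function are
# invented for illustration; real tooling reads BIOS state via management APIs.

GOLDEN_BIOS = {
    "SysProfile": "PerfOptimized",   # performance-tuned profile for deep learning
    "LogicalProc": "Enabled",        # hyper-threading on
    "MemOpMode": "OptimizerMode",
}

def find_drift(node_name, reported_bios):
    """Return (setting, expected, actual) tuples where a node drifted."""
    drift = []
    for setting, expected in GOLDEN_BIOS.items():
        actual = reported_bios.get(setting)
        if actual != expected:
            drift.append((setting, expected, actual))
    return drift

# A node whose memory mode was changed out-of-band:
report = {"SysProfile": "PerfOptimized", "LogicalProc": "Enabled",
          "MemOpMode": "MirrorMode"}
print(find_drift("node042", report))  # [('MemOpMode', 'OptimizerMode', 'MirrorMode')]
```

In a real deployment the baseline would come from the validated solution profile, and an alert would fire whenever the returned list is non-empty.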
And so to really test that out, we sent Jeff Frick, who's the general manager and host of theCUBE, down to the bowels of Dell EMC's operations in Austin, Texas. Jeff went and visited the Dell EMC HPC and AI Innovation Lab and met with Garima Kochhar, a Senior Principal Engineer on the technical staff. Let's hear what Jeff learned. >> We're excited to have with us our next guest. She's Garima Kochhar. She's on the technical staff and a Senior Principal Engineer at Dell EMC. Welcome. >> Thank you. >> From your perspective, what's changing in the landscape from high-performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning we hear about much more in business contexts today? >> High-performance computing has applicability across a broad range of industries. So not just national labs and supercomputers, but the commercial space as well. And in our lab, we've done a lot of that work in the last several years. And then the deep-learning algorithms, those have also been around for decades. But what we are finding right now is that the algorithms and the hardware, the technologies available, have hit that perfect point, along with industry interest and the amount of data we have, to make it more, what we would call, mainstream. >> So you can build an optimum solution, but ultimately you wanna build industry solutions. And then, even a subset of that, you invite customers in to optimize for their particular workflow or their particular business case, which may not match the perfect benchmark spec at all, right? >> That's exactly right. And so that's the reason this lab is set up for customer access, because we do the standard benchmarking. But you want to see, what is my experience with this, how does my code work? And it allows us to learn from our customers, of course.
And it allows them to get comfortable with the technologies, to work directly with the engineers and the experts, so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. >> Right. So you guys built the whole rack out, right? Not just the fun shiny new toys. >> Yeah, you're right. So typically, when something fails, it fails spectacularly. Right, so you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator, or things like that, right? So there are lots of other teams that handle, of course, Dell's really good at this, the logistics piece of it, but even within the lab. When you walk around the lab, you'll see our racks are set up with power meters. So we do power measurements. Whatever best practices in tuning we come up with, we feed that into our factories. So if you buy a solution, say targeted for HPC, it will come with different BIOS tuning options than a regular, say Oracle database, workload. We have this integration into our software deployment methods. So when you have racks and racks of equipment, or one rack of equipment, or maybe even three servers, and you're doing an installation, all the pieces are baked in already and everything is easy, seamless, easy to operate. So our idea is: the more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT departments have to spend worrying about the equipment, and the more they can focus on their unique and specific use case. >> Right, you guys have a services arm as well. >> Well, we're an engineering lab, which is why it's really messy, right? Like if you look at the racks, if you look at the work we do, we're a working lab. We're an engineering lab. We're a product development lab. And of course, we have a support arm. We have a services arm. And sometimes we're working with new technologies.
We conduct training in the lab for our services and support people, but we're an engineering organization. And so when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. >> Right, so what's the benefit of having experience with this broader set of applications as you apply it to some of the newer, more exciting things around AI, machine learning, deep learning? >> Right, so the fact that we are a shared lab, right? Like the bulk of this lab is high-performance computing and AI, but there are lots of other technologies and solutions we work on over here. And there are other labs in the building that we have colleagues in as well. The first thing is that the technology building blocks for several of these solutions are similar, right? So when you're looking at storage arrays, when you're looking at Linux kernels, when you're looking at network cards, or solid state drives, or NVMe, several of the building-block technologies are similar. And so when we find interoperability issues, which, you would think that there would never be any problems, you throw all these things together, they always work like-- >> (laughs) Of course (laughs). >> Right, so when you sometimes, rarely, find an interoperability issue, that issue can affect multiple solutions. And so we share those best practices, because we engineers sit next to each other and we discuss things with each other. We're part of the larger organization. Similarly, when you find tuning options and nuances and parameters for performance or for energy efficiency, those also apply across different domains. So while you might think of Oracle as something that's been done for years, with every iteration of technology there's new learning, and that applies broadly across anybody using enterprise infrastructure.
What are some of the things where you think, "I'm so excited that we can now apply this horsepower to some of these problems out there"? >> Right, so that's a really good point, right? Because most of the time when you're trying to describe what you do, it's hard to make everybody understand. Well, not what you're doing, right? But sometimes with deep technology it's hard to explain what the actual value of it is. And so a lot of the work we're doing, in terms of exascale, it's to move the human body of knowledge forward, to grow the science happening in each country and move that forward. And at the higher end, when you talk about national labs and defense, everybody understands that needs to be done. But when you find that your social media is doing some face recognition, everybody experiences that and everybody sees that. And when you're trying to describe it, we're all talking about driverless cars, or we're all talking about, "Oh, it took me so long, because I had this insurance claim, and then I had to get an appointment with the appraiser, and they had to come in." I mean, those are actual real-world use cases where some of these technologies are going to apply. So even in industries you didn't think of as being leading-edge on the technical forefront, in terms of IT infrastructure and digital transformation, in every one of these places you're going to have an impact from what you do. >> Right. >> Whether it's drug discovery, or whether it's next-generation gene sequencing, or whether it's designing the next car, pick your favorite car, or when you're flying in an aircraft, the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you worked with. And so now it's everywhere, everywhere you go. We talked about 5G and IoT and edge computing. >> Right. >> I mean, we all work on this collectively. >> Right. >> So it's our world. >> Right.
Okay, so last question before I let you go. It's about having the resources to bear, in terms of your position, to do the work, now that you've got massive resources behind you. You have Dell, the merger with EMC, all the subset brands, Isilon, so many brands. How does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? >> Yeah, exactly. So when you're building complex solutions, there's no one company that makes every single piece of it, but the tighter that things work together, the better that they work together. And that comes directly through all the technologies that we have under the Dell Technologies umbrella and with Dell EMC. And that's because of our super close relationships with our partners, which allow us to build these solutions that are painless for our customers and our users. And so that's the advantage we bring. >> Alright. >> This lab and our company. >> Alright, Garima. Well, thank you for taking a few minutes. Your passion shines through. (laughs) >> Thank you. >> I really liked hearing about what Dell EMC's doing in their innovation labs down at Austin, Texas, but it all comes together for the customer. And so the last segment that we wanna bring you here is a great one. Nick Curcuru, who's the Vice President of Big Data Analytics at Mastercard, is here to talk about how some of these technologies are coming together to speed value and realize the potential of AI at Mastercard. Nick, welcome to theCUBE. >> Thank you for letting me be here. >> So Mastercard, tell us a little bit about what's going on at Mastercard.
>> There's a lot going on at Mastercard, but I think the most exciting things we're doing right now are with artificial intelligence: how we're using AI to really allow a seamless experience when someone's actually doing a transaction, and also bringing a level of security to our customers and our banks and the people that use Mastercard. >> So AI to improve engagement, provide a better experience, but that's a pretty broad range of things. Specifically, when you think about how AI can be applied, what are you looking to, especially early on? >> Well, let's actually take a look at our core business, which is being able to make sure that we can secure a payment, right? So at this particular point, we're applying AI to biometrics. But not just a fingerprint or facial recognition, but actually how you interact with your device. So you think of the Internet of Things, and you're sitting back saying, "I'm swiping my mobile device, or this is how I interact with a keyboard." Those are all key signatures. And with NuData, a company we've just acquired, we're taking that capability to create a profile and make that a part of your signature. So it's beyond just a fingerprint. It's beyond just a facial. It's actually how you're interacting, so that we know it's you. >> So there's a lot of different potential sources of information that you can utilize, but AI is still a relatively young technology and practice. And one of the big issues for a lot of our clients is, how do you get time to value? So take us through, if you would, a little bit about some of the challenges that Mastercard, or anybody, would face in trying to get to that time to value.
Well, what you're really looking for is a good partner when you're doing artificial intelligence, because at that particular point, you're trying to get to scale. For us, it's always about scale. How can we roll this out across 220 countries? We do 165 million transactions per hour, right? So what we're looking for is a partner who also has that ability to scale. A partner who has the global presence, who's learning. So that's the first step. That's gonna help you with your time to value. The other part is actually sitting back and using those particular partners to bring the expertise that they're learning to combine with yours. It's no longer just silos. So when we talk about artificial intelligence, how can we be learning from each other? Those open source systems that are out there, how do we learn from that community? It's that community that allows you to get there. Again, those that are trying to do it on their own, by themselves, they're not gonna get to the point where they need to be. In other words, what should be a six-month time to value is gonna take them years. We're trying to accelerate that. You say, "How can we get those algorithms operating for us the way we need them to, to provide the experiences that people want, quickly?" And that's with good partners. >> 165 million transactions per hour is only likely to go up over the course of the next few years. That creates an operational challenge. AI is associated with a probabilistic set of behaviors, as opposed to categorical ones. A little bit more difficult to test, a little bit more difficult to verify. How is the introduction of some of these AI technologies impacting the way you think about operations at Mastercard? >> Well, for the operations, when you take a look, there are actually three components, right? There's right there on the edge, when someone's interacting and actually doing the transaction. And then we look at it as we have a core.
So that core sits there, right? Basically, that's where you're learning. And then there's what we call the deep-learning component of it. So for us, it's about how we split what we need to have in the core and what we need to have on the edge. The question for us always is, we want that algorithm to be smart. So what three to four things do we need that algorithm to be looking for, within that artificial intelligence, such that it then goes back into the core and retrieves something, whether that's your fingerprint, your biometrics, how you're interacting with that machine, to say, "Yes, that's you. Yes, we want that transaction to go through." Or, "No, stop it before it even begins." It's that interaction and operational basis where we always have a dynamic tension, but it's how we get from the edge to the core. And it's understanding what we need it to do. So we're breaking apart what intelligence we have to have to be able to create a decision for us. That's how we're trying to manage it, as well as, of course, the hardware that goes with it and the tools that we need in order to make that happen. >> Let's get into the hardware just a little bit. Historically, different applications put pressure on different components within a stack. One of the observations we've made is that the transition from spinning disk to flash allows companies like Mastercard to move from just persisting data to actually delivering data. >> Yeah. >> Much more rapidly. So with these AI technologies, what kind of new pressures do they put on storage? >> Well, they put tremendous pressure on it, because that's the next tension, or dynamic, that you have to play with. So what do you wanna have on disk? What do you need flash to do? Again, if you look at some people, everyone's like, "Oh, flash will take over everything."
It's like, no, there's a reason for flash to exist, and it's about understanding what that reason is: "Hey, I need that to be able to do this in sub-seconds, nanoseconds," as I've heard the term before. That's what you're asking flash to do. Deep learning, that I want on disk. I want to be taking all those millions and billions of transactions that we're gonna see and learning from them. All the ways that people will be trying to attack me, right? The bad guys, how am I learning from everything? That can sit there on disk and continue to run; that's the deep learning. Flash is for when I wanna create a seamless transaction with a customer, or a consumer, or from business to business. I need to have that decision now. I need to know it is you who is trying to swipe or purchase something with my mobile device or through the Internet. Or how am I actually swiping, inserting, or dipping my card in that particular machine at a merchant. That's where we look at how we use flash. >> So you're looking at perhaps using older technologies, or different classes of technologies, for some of the training elements, but really moving to flash for the interfacing piece, where you've gotta deliver the real-time effort right now. >> And that's the experience. And that's what you're looking for. And you wanna be able to make sure you're making those distinctions. 'Cause again, it's no longer one or the other. It's how they interact. And again, when you look at your partners, the question now is, how are they interacting? Has this been done at scale somewhere else? Can you help me understand how I need to deploy this so that I can reduce my time to value, which is very, very important to create that seamless, frictionless transaction we want our consumers to have.
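The flash-versus-disk placement rule Curcuru outlines, flash for the real-time decision path and disk for the bulk learning corpus, can be sketched as a simple routing function. The tier names, fields, and latency budget below are invented for illustration, not Mastercard's actual policy.

```python
# Illustrative sketch of the tiering rule described above: data on the
# real-time decision path goes to flash, bulk training history goes to
# spinning disk. The 10 ms budget is an invented cutoff.

def place(workload):
    """Return a storage tier for a workload descriptor."""
    if workload["latency_budget_ms"] <= 10 and workload["access"] == "random":
        return "flash"          # sub-10 ms decisioning path
    return "disk"               # bulk sequential scans for deep learning

decisioning = {"latency_budget_ms": 5,    "access": "random"}
training    = {"latency_budget_ms": 5000, "access": "sequential"}

print(place(decisioning))  # flash
print(place(training))     # disk
```

The point of the sketch is that tier choice follows the workload's latency requirement and access pattern, not a blanket "flash everywhere" rule.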
>> So Nick, you talked about how you wanna work with companies that demonstrate that they have expertise, because you can't do it on your own. Companies that are capable of providing the scale that you need. So just as we talk about how AI is placing pressure on different parts of the technology stack, it's also got to be putting pressure on the traditional relationships you have with technology suppliers. What are you looking for in suppliers as you think about these new classes of applications? >> Well, for us, it's: do you have the scale that we're looking at? Have you done this before, at that global scale? Again, in many cases you can have five guys in a garage who can do great things, but where has it been tested? And when we say tested, it's not just, "Hey, we did this in a pilot." We're talking it's gotta be robust. So that's one thing that you're looking for. You're also looking for a partner who can bring, for us, additional information that we don't have ourselves, right? In many cases, when you look at that partner, they're gonna bring something where they're almost an adjunct part of your team. They are your bench strength. That's what we're looking for. What expertise do you have that we may not? What are you seeing, especially on the technology front, that we're not privy to? What are the different chips that are coming out, the new ways we should be handling the storage, the new ways the applications are interacting with it? We want to know that from you, because again, there's a competition for talent, and we're looking for a partner who has that talent and will bring it to us so that we don't have to search for it. >> At scale. >> Yeah, especially at scale. >> Nick Curcuru, Mastercard. Thanks for being on theCUBE. >> Thank you for having me.
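Earlier in the conversation, Curcuru described splitting the decision between a lightweight check of three or four signals at the edge and a deeper lookup in the core. A minimal sketch of that split might look like the following; every signal name and threshold here is invented for illustration, and real fraud scoring is vastly richer.

```python
# Hypothetical sketch of the edge/core split: a cheap edge check decides
# immediately when it can, and anything ambiguous is escalated to the core
# for a slower, deeper decision. All fields and cutoffs are invented.

def edge_decision(txn):
    """Fast path: approve/deny on a few cheap signals, else defer to core."""
    if txn["amount"] < 50 and txn["known_device"] and txn["home_region"]:
        return "approve"
    if txn["velocity_1h"] > 20:          # e.g. 20+ transactions in an hour
        return "deny"
    return "defer"                        # needs the core's richer models

def core_decision(txn):
    """Slow path stand-in: deep-learning scoring would happen here."""
    return "approve" if txn["profile_score"] > 0.7 else "deny"

def decide(txn):
    verdict = edge_decision(txn)
    return core_decision(txn) if verdict == "defer" else verdict

txn = {"amount": 400, "known_device": True, "home_region": False,
       "velocity_1h": 2, "profile_score": 0.9}
print(decide(txn))  # escalated to the core, which approves
```

The design mirrors the "dynamic tension" in the interview: the edge stays small and fast, and only the questions it cannot answer travel back to the core.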
>> So there you have a great example of what a leading company is doing to take full advantage of the possibilities of AI by utilizing infrastructure that gets the job done simpler, faster, and better. So let's imagine for a second how it might affect your life. Well, here's your opportunity. We're now gonna move into the CrowdChat part of the event, and this is your chance to ask peers questions, provide your insights, tell your war stories, and, ultimately, to interact with thought leaders about what it means to get ready for AI. Once again, I'm Peter Burris. Thank you for watching. Now let's jump into the CrowdChat.