
Jay Boisseau, Dell Technologies | SuperComputing 22


 

>>We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host Paul Gillin with my co-host Dave Nicholson, and we've been talking to so many smart people this week that it boggles my mind. Our next guest, Jay Boisseau, is the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas, so I'm guessing you were up watching the Artemis launch the other night? >>I wasn't. I really should have been, but I was in full supercomputing conference mode, and that means discussions at various venues with people into the wee hours. >>How did you make the transition from a PhD in astronomy to an HPC expert? >>It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, which is a class of stars that blow up. It's a very important class because they blow up almost exactly the same way. So if you can determine from first principles how bright they are physically, not just how bright they appear in the sky, then you have a standard ruler for the universe: when one goes off in a galaxy, you know how far away the galaxy is by how faint the explosion appears. To model these, though, you had to understand equations of physics including electron degeneracy pressure as well as normal fluid dynamics, and you were solving for an explosive burning front ripping through something. That required a supercomputer to get anywhere close to the fidelity needed for a reasonable answer and, hopefully, some understanding. >>So I've always said electrons are degenerate. I mentioned to Paul earlier, finally we're going to get a guest to sort through this whole dark energy, dark matter thing for us. We'll do that after the segment. >>That's a whole different conversation. >>So supercomputing is a natural tool that you would use. What do you do in your role as a strategist? >>I'm in the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive with what they've got, but always wanting to know what's coming next. Because if you think about it, we can't simulate the entire human body cell for cell on any supercomputer today. We can simulate parts of it cell for cell, or the whole body with macroscopic physics, but not the entire organism at the atomic level. So we're always trying to build more powerful computers to solve larger problems with more fidelity and fewer approximations. I help people understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning which technologies they have, because that feeds the first thing. Dell is very proud of its large partner ecosystem; we embrace many different partners with different capabilities, and understanding those helps you understand what your future systems might be. Those are two of the major roles: strategic customers and strategic technologies.
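Jay's standard-candle argument boils down to the inverse-square law: once a Type Ia supernova's intrinsic brightness is known from first principles, its apparent faintness gives the distance. A minimal sketch of that calculation follows, using the standard distance-modulus relation; the peak absolute magnitude of roughly -19.3 and the apparent magnitude of 14.0 are assumed example values, not figures from the interview.

```python
def luminosity_distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Assumed example values: M ~ -19.3 at peak for a Type Ia supernova,
# m = 14.0 for a hypothetical observed event in a distant galaxy.
d_pc = luminosity_distance_pc(apparent_mag=14.0, absolute_mag=-19.3)
print(f"distance ~ {d_pc:.2e} pc (~{d_pc * 3.26:.2e} light-years)")
```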
>>So you've had four days to wander this massive floor here, with lots of startups and lots of established companies doing interesting things. What have you seen this week that really excites you? >>I'm going to tell you a dirty little secret. If you work for someone who makes supercomputers, you don't get as much time to wander the floor as you would think, because you get lots of meetings with people who want to understand, under NDA, not just what's public on the floor but what you're not telling them on the floor: what's coming next. So I've been in a large number of customer meetings as well as on the floor. While I obviously can't share anything that's a non-disclosure topic, some things we're hearing a lot about: people are really concerned with power, because they see the TDP on the roadmaps for all the silicon providers going way up. With power comes heat as waste, and that means cooling. So power and cooling have been a big topic here. Obviously accelerators are increasing in importance in HPC, not just for AI calculations but now also for simulation calculations, and we are very proud of the three new accelerator platforms we launched here at the show that are coming out in a quarter or so. Those are two of the big topics we've seen. As you walk the floor you also see lots of interesting storage vendors. The HPC community has been doing storage the same way for roughly 20 years, but now we see a lot of interesting players in that space; we have some great things in storage now and some great things coming in a year or two as well, so it's interesting to see the diversity there. And then there are always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical-quantum computing system here with IonQ. I can't say what the future holds in this format, but I can say we believe strongly in the future of quantum computing, that this future will be integrated with the kind of classical computing infrastructure that we make, and that this will help make quantum computing more powerful downstream. >>Well, let's go down that rabbit hole, because quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago, when some of the major vendors were announcing quantum computers in the cloud. The excitement has kind of died down; we don't see a lot of talk about commercial quantum computers. Yet you're deep into this. How close are we to having a true quantum computer, or is a hybrid more likely? >>There are probably more than 20, and I think close to 40, companies trying different approaches to make quantum computers. Microsoft is pursuing a topological approach, others a photonics-based approach, IonQ an ion-trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use them in other technologies, we know the physics, but the engineering is very difficult. Just like it was difficult at one point to split the atom, it's very difficult to build technologies that leverage quantum properties of nature in a consistent, reliable, and durable way. So I wouldn't want to make a prediction, but I will tell you I'm an optimist.
I believe that when a tremendous capability with tremendous monetary-gain potential lines up with another incentive like national security, engineering seems to evolve faster; when there's plenty of investment and plenty of incentive, things happen. I think my friends in the office of the CTO at Dell Technologies, who are really leading this effort for us, would say a few to several years. I'm an optimist, so I believe we will sell some of the solution we announced here in the next year to people who are trying to get their feet wet with quantum, and I believe we'll be selling multiple hybrid classical-quantum Dell computing systems a year within a year or two. And then of course you hope it goes to tens and hundreds by the end of the decade. >>When people, people writ large, leaders in supercomputing, talk, I would say Dell's name doesn't come up in the conversations I have. What would you like them to know that they don't know? >>I hope that's not true, but I guess I understand it. We are so good at making the products from which people make clusters that we're number one in servers, we're number one in enterprise storage, we're number one in so many areas of enterprise technology, and I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But depending on which analyst you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest, splashiest systems, but we did the Frontera system at TACC and the HPC5 system at Eni in Europe, the largest academic supercomputer in the world and the largest industrial supercomputer. >>Those are both on Dell? >>On Dell hardware, yep. But our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world, and those problems come at various scales. So we're democratizing HPC to make it easier for more people to get in at whatever scale their budget and workloads require, and we're optimizing it to make sure that it's not just parts they're getting, but parts validated to work together with maximum scalability and performance. We have a great HPC and AI Innovation Lab that does this engineering work, because one of the myths is, oh, I can just go buy a bunch of servers from company X and a network from company Y and a storage system from company Z, and it'll all work as an equivalent cluster. Not true. It'll probably work, but it won't be the highest performance, scalability, or reliability. So we spend a lot of time optimizing, and we are also doing things to try to advance the state of HPC: what our future systems look like in the second half of this decade might be very different from what they look like right now. >>You mentioned a great example of a limitation we're running up against right now: an entire human body as an organism. >>Or any large system that you try to model at the atomic level but that is a huge macroscopic system. >>Right.
So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances, as opposed to qualitative advances? Right now, as an example, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next-generation parts, and those next-gen CPUs and GPUs are going to be plugged into next-gen motherboards: PCIe Gen 5 with Gen 6 coming, faster and bigger memory, faster networking, whether that's Ethernet or InfiniBand, storage controllers, all bigger, better, faster, stronger. And I suspect that a lot of the systems out there, systems like Frontera, I don't know, are not necessarily on what we would think of as current-generation technology; as a practical matter, maybe they're n minus one. >>Yeah, I mean, they have a lifetime. >>Exactly, the lifetime is longer than the evolution. >>That's normal for these technologies. So what some people miss is the reality that when an individual organization moves forward with the latest things being talked about here, it's often a two-generation move. >>Now, some organizations will have multiple systems, and the systems leapfrog technology generations: even if one is their really large system, their next one might be newer technology but smaller, and the one after that might be a larger one with newer technology, and so on. So the biggest supercomputing sites are often running more than one HPC system, each specifically designed with the latest technologies and configured for a different subset of their workloads. >>So, to go back to the core question: in your opinion, do we need that qualitative leap to something like quantum computing, or is it simply a question of scale and power at the individual node level, to get to the point where we can gain insight from a digital model of an entire human body, not just an organ? And to your point, it's not just the human body but any system we would characterize as chaotic today, a weather system, whatever. Are there any milestones you're thinking of where you'd say, I understand everything that's going on, and we're a year away, a compute generation away, from being able to gain insight out of systems that right now we can't, simply because of scale? It's a very long question I just asked you, but hopefully you're tracking it. What are these inflection points in your mind? >>I'll start simple. Remember when we used to buy laptops and we worried about the clock speed? Everybody knew the gigahertz of it. There are some tasks at which we're now so good at making the hardware that the primary issues are how great the screen is, how light it is, what the battery life is like, et cetera, because for the set of applications on there, we have enough compute power.
Most people don't need their laptop to have twice as powerful a processor; they'd rather have twice the battery life, or whatnot. We make great laptops; we design for all of those parameters now, and we see some customers who want more of x and somewhat more of y, but the general point is that the amazing progress in microprocessors is sufficient for most of the workloads at that level. Now let's go to the HPC level, the scientific and technical level. If you're trying to model the orbit of the moon around the earth, you don't really need a supercomputer for that. You can get a highly accurate model on a workstation or a server, no problem; it won't even break a sweat. >>I had to do it with a slide rule. >>That might make you break a sweat. But that's a single body orbiting another body, and I say orbiting around, but we both know they're really both orbiting the center of mass; it's just that if one is much larger, it seems like one goes entirely around the other. So that's not a supercomputing problem. What about the stars in a galaxy, trying to understand how galaxies form spiral arms and how they spur star formation? Now you're talking a hundred billion stars plus a massive amount of interstellar medium. Can you solve that on that server? Absolutely not, not even close. Can you solve it on the largest supercomputer in the world today? Yes and no: you can solve it with approximations, but there are a lot of approximations that go into even that. The good news is the simulations produce things that we see through our great telescopes, so we know the approximations are sufficient to get good fidelity. But you're not doing direct numerical simulation of every particle, which is impossible; you'd need a computer as big as the universe for that. The approximations, and the known parts of the science, are good enough to give fidelity. So to answer your question, there are a tremendous number of problem scales. There are problems in every field of science and study that exceed the direct numerical simulation capabilities of systems today, and so we always want more computing power. It's not macho flops, it's real: we need exaflops, and we will need zettaflops beyond that and yottaflops beyond that. But an increasing number of problems will be solved as we keep working on problems that are farther out there. In terms of qualitative steps, I do think technologies like quantum computing, to be clear, as part of a hybrid classical-quantum system, because they're really just accelerators for certain kinds of algorithms, not for general-purpose algorithms, are going to be necessary to solve some of the very hardest problems. It's easy to formulate an optimization problem that is absolutely intractable on the largest systems in the world today, but quantum systems, in theory, when they're big and stable enough, happen to be great at that kind of problem. >>That should be understood: quantum is not a cure-all for the shortage of computing power. It's very good for certain problems. >>And as you said, at this Supercomputing we see some quantum, but it's a little bit quieter than I probably expected. I think we're in a period now of everybody saying, okay, there's been a lot of buzz, we know it's going to be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those at the show.
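Jay's "yes and no" about galaxy-scale simulation comes down to arithmetic: brute-force N-body is hopeless at that scale, which is why production codes rely on approximations such as tree or particle-mesh methods. The back-of-the-envelope sketch below uses assumed round numbers (1e11 stars, about 20 floating-point operations per pairwise force, an exaflop-class machine); they are illustrative only, not figures from the interview.

```python
# Rough cost of one timestep of direct (all-pairs) gravitational N-body simulation.
N = 1e11                  # assumed star count for a large spiral galaxy
pairs = N * (N - 1) / 2   # every star interacts with every other star
flops_per_pair = 20       # assumed cost of one pairwise force evaluation
machine_flops = 1e18      # an exaflop-class supercomputer

seconds_per_step = pairs * flops_per_pair / machine_flops
print(f"{pairs:.1e} pair interactions per timestep")
print(f"~{seconds_per_step / 3600:.0f} hours per timestep at 1 exaflop/s")
# A useful simulation needs many thousands of timesteps, hence the approximations.
```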
>>We have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately we only have so many minutes, and we're out of them. Jay Boisseau, HPC and AI technology strategist at Dell, thanks for a fascinating conversation. >>Thanks for having me. Happy to do it anytime. >>We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillin with Dave Nicholson. Stay with us.

Published Date : Nov 18 2022


Satish Iyer, Dell Technologies | SuperComputing 22


 

>>We're back at Supercomputing 22 in Dallas, winding down the final day here, with a big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? >>Oh, it's crazy. Any time you have NASA presentations going on and steampunk iterations of cooling systems, it's the greatest. >>I've been to hundreds of trade shows; I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson is my co-host, I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies. Satish, thanks for joining us on theCUBE. >>Thank you, Paul. >>What are emerging services? >>Emerging services are the growth areas for Dell: telecom, cloud, edge. We especially focus on all the growth vectors for the company. >>And one of the key areas that comes under your jurisdiction is called Apex. I'm sure there are people who don't know what Apex is. Can you give us a quick definition? >>Absolutely. Apex is Dell's foray into cloud, and I manage the Apex services business. This is our way of bringing the cloud experience to our customers, on-prem and in colo. >>But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as a service. >>It's infrastructure and platform and solutions as a service. We don't have our own public cloud, but this is a multi-cloud world, so customers want to consume where they want to consume. This is Dell's way of supporting a multi-cloud strategy for our customers. >>You mentioned something just ahead of us going on air, a great way to describe Apex: to contrast Apex with CapEx. There's no cash up front necessary. I thought that was great. Explain that a little more. >>One of the main things about cloud is the consumption model. Customers would like to pay for what they consume, they would like to pay in a subscription, and they would like to not prepay CapEx ahead of time; they want that economic option. I think that's one of the key tenets for anything in cloud, so it's important for us to recognize that, and Apex is basically a way by which customers pay for what they consume. That's absolutely a key tenet of how we want to design Apex. >>And among those services are high performance computing services. I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>This conference is great; like you said, there are so many HPC and high performance computing folks here. But fundamentally, if you look at the high performance computing ecosystem, it is quite complex, and when you deliver it as an Apex HPC offer, it brings a lot of the cloud economics and cloud experience to the HPC offer. Fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes on a lot of the day-to-day management of the infrastructure so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workloads they run on the HPC ecosystem.
So it is a high performance computing offer, but instead of customers buying the infrastructure and running all of that themselves, we make it super easy for them to consume and manage it, across proven designs that Dell implements for these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it HPC? >>Ah, that's a great question. This is a platform; we are not just selling infrastructure by the drink. We launched two validated designs, one for life sciences and one for manufacturing, so we actually know how these pieces work together as a validated, tested solution. And because it's a platform, we integrate the software on top; it's not just the infrastructure. We integrate a cluster manager, a job scheduler, a container orchestration layer. A lot of these things customers would have to do themselves if they just bought the infrastructure. So basically we are giving our customers a platform, an ecosystem, to run their workloads, and making it easy for them to consume. >>Now is this available on premises for customers? >>Yeah, we make it available both ways. We make it available on-prem for customers who want those economics, and we also make it available in a colo environment if customers want to extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they have to set the groundwork? >>Fundamentally, it starts with what the actual use case is. If you look at the two validated designs we talked about, one for healthcare and life sciences and the other for manufacturing, they have fundamentally different requirements in terms of what you need from the infrastructure. So customers initially figure out whether they require something with a lot of memory-intensive loads or something with a lot of compute power; it all depends on the workloads they want to run. Then we do the sizing: we have small, medium, and large, we have multiple infrastructure and CPU core options, and sometimes a customer will also say that along with the regular CPUs they want some GPU power on top. Those are determinations a customer typically makes as part of the ecosystem, and those are the things they talk to us about: what is my best option for the kind of workloads I want to run? Then they can make a determination on how they would actually go. >>This is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of rolling thunder from the various partners that you have. We're all expecting that Intel is going to be rolling out new CPUs; from a PowerEdge perspective,
you have your 16th generation of PowerEdge servers coming out with PCIe Gen 5, and all of the components from partners like NVIDIA and Broadcom, et cetera, plugging into them. What does that look like from your perch, in terms of talking to customers who may be doing things traditionally and are likely not on 15th-generation servers but probably more like 14th? You're offering a pretty huge uplift. What do those conversations look like? >>Talking about partners: of course, Dell doesn't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, Intel, AMD, Broadcom, all the chip vendors, all the way to the software layer, where we have cluster managers and Kubernetes orchestrators. What we usually do is bring the best in class, whether it's a software player or a hardware player, and bring it together as a solution. So we do give the customers a choice, and customers always want to pick what they know works well. And one of the main aspects, especially when you bring these things as a service, is that we take a lot of the guesswork away from our customer. A good example in HPC is capacity. These are very intensive, very complex systems. Customers would like to buy a certain amount of capacity, grow, and come back down. Giving them the flexibility to consume more if they want, giving them the buffer and letting them come back down, all of those things are very important as we design these offers. Customers are given a choice, but they don't need to worry about what happens if they have a spike; there's already buffer capacity built in. Those are great things when you deliver things as a service. >>When customers are doing their ROI analysis of buying CapEx on-prem versus using Apex, is there typically a crossover point at which it's probably a better deal for them to go on-prem? >>Speaking specifically about HPC, a lot of customers consume high performance compute in the public cloud, and that's not going to go away. But there are certain reasons why they would look at on-prem or, for example, a colo environment. One of the main reasons is purely cost: these are pretty expensive systems, there is a lot of ingress and egress, a lot of data going back and forth, and in the public cloud it costs money to put data in or pull data back. The second one is data residency and security requirements; a lot of this is proprietary information. We talked about life sciences, where there's a lot of research, and manufacturing, where a lot of decisions are just-in-time. You're on a factory floor, you've got to be able to act, and there is a latency requirement.
So a lot of things play into this beyond just cost; data residency requirements and ingress and egress are big things. When you're talking about massive amounts of data you want to push in and pull back, customers would like to keep it close, keep it local, and get a good price point. >>Nevertheless, we were just talking to Ian Coley from AWS, and he was talking about how customers need to move workloads back and forth between the cloud and on-prem; that's something they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>Dell's cloud strategy is multi-cloud, and it kind of falls into three parts. Some workloads are always suited for public cloud; it's easier to consume there. Customers also consume on-prem, and customers also consume in colo. And we have Dell's amazing software IP, like our storage software, and we make some of that available for customers to consume on their public cloud. That's our multi-cloud strategy; we announced Project Alpine, for example. Basically customers are saying, I love your Dell IP in this storage product; can you make it available in this public environment, whichever of the hyperscalers it is? If we do all of that, it shows that it's not always tied to an infrastructure. Customers want to consume the best, and if it needs to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we support containers on HPC. We have two container orchestration options we support, including Singularity, so customers have both options. >>What kind of customers are you signing up for the HPC offerings? Are they university research centers, or do they tend to be smaller companies? >>The last three days of this conference have been great; we've probably had somewhere in the range of 40 to 50 customers talking to us about HPC. I would say a lot of interest from educational institutions and university research, to your point, and a lot of interest from manufacturing and factory-floor automation; a lot of customers want to do dynamic simulations on the factory floor. There's also quite a bit of interest from life sciences and pharma because, like I said, we have two designs, one for life sciences and one for manufacturing, both with different dynamics on the infrastructure. So quite a bit of interest from academics, life sciences, and manufacturing. We also have a lot of financials, big banks who want to simulate a lot of brokerage and financial data, because we announced some really optimized Dell hardware especially for financial services. So there's quite a bit of interest from financial services as well. >>That's great. We often think of Dell as the organization that eventually democratizes all things in IT.
And in that context, here at Supercomputing 22, HPC is like the little sibling trailing behind the supercomputing trend, but we have definitely seen this move out of purely academia into the business world, and Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >>Yeah, it's been less than two years. >>How are mainstream Dell customers embracing Apex versus the traditional, maybe 18-month to three-year, CapEx upgrade cycle? >>I think there is absolutely strong momentum for Apex. Like Paul pointed out earlier, we started with making the infrastructure and the platforms available for customers to consume as a service. We have options where Dell can fully manage everything end to end and take a lot of the pain points away, like we talked about, because it's basically managing a cloud-scale environment for the customers. We also have options where customers say, I actually have a pretty sophisticated IT organization, I want Dell to manage the infrastructure up to this layer, up to the guest operating system, and I'll take care of the rest. So we are seeing customers come to us with various requirements: I can do up to here, but you take all of this pain away from me, or you do everything for me. It all depends on the customer, and we have wide interest. Our products and the portfolio in Apex are expanding, and we are also learning: we are getting a lot of feedback from customers on what they would like to see in some of these offers, like the example we just talked about of making some of the software IP available on a public cloud, where they look at Dell as a software player. That is absolutely critical too. So I think we are giving customers a lot of choices, and, like you said, we are democratizing, expanding the customer choices. >>We're almost out of time, but I do want to be sure we get to Dell validated designs, which you've mentioned a couple of times. What's the purpose of these designs? How specific are they?
So I think that that gives that, and plus it's the power of validation, really, right? We test, validate, integrate, so they know it works, right? So all of those are hypercritical. When you talk to, >>And you mentioned healthcare, you, you mentioned manufacturing, other design >>Factoring. We just announced validated design for financial services as well, I think a couple of days ago in the event. So yep, we are expanding all those DVDs so that we, we can, we can give our customers a choice. >>We're out of time. Sat ier. Thank you so much for joining us. Thank you. At the center of the move to subscription to everything as a service, everything is on a subscription basis. You really are on the leading edge of where, where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you Dave. >>Paul Gillum with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon and stay with us for, they'll be half more soon.

Published Date : Nov 17 2022


Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22


 

>>Welcome back to theCUBE's coverage of Supercomputing Conference 2022, otherwise known as SC22, here in Dallas, Texas. This is day three of our coverage, the final day here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire Paul Gillin. How's it going, Paul? >>Hi, Dave. It's going well. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to theCUBE. >>Thanks a lot. >>I know you're chomping at the bit. You have incredible credentials, over 500 papers published, and the impact you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high-performance MPI library that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I started with this conference in 2001; that was the first time I came. It's very coincidental: if you remember, InfiniBand networking technology was introduced in October of 2000. In my group we were working on MPI for Myrinet and Quadrics, the old technologies from before InfiniBand, and we were the very first in the world to really jump in. Nobody knew how to use InfiniBand in an HPC system, and that's how the MVAPICH project was born. In fact, at Supercomputing 2002, on the exhibition floor in Baltimore, we had the first demonstration of open-source MVAPICH running on an eight-node InfiniBand cluster, and that was a big challenge. Over the years we have continuously worked with all the InfiniBand vendors and the MPI Forum; we are a member of the MPI Forum, and we work with all the other network interconnects as well. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members working nonstop, continuously bringing not only performance but scalability. If you look now, InfiniBand is being deployed in 8,000- and 10,000-node clusters, and many of these clusters actually use our software stack, MVAPICH. Our focus is that we first do research, because we are in academia: we come up with good designs, we publish, and in six to nine months we bring it to the open-source version, and people can just download it and use it. That's how it has come to be used by more than 3,000 organizations in 90 countries. But the interesting thing, to the second part of your question, is that the field is moving into not just HPC but AI and big data, and we support those too. This is where we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but all other workloads can take advantage of it.
>>We have seen libraries like TensorFlow and PyTorch become critical development platforms supporting AI, and the emergence of some sort of default frameworks that are driving the community. How important are these frameworks to making progress in the HPC world? >>Those are great; PyTorch and TensorFlow are now the bread and butter of deep learning and machine learning. But the challenge is that people use these frameworks while models continuously become larger, and you need very fast turnaround time. How do you train faster? How do you do inferencing faster? This is where HPC comes in, and what we have done is link PyTorch to our MVAPICH stack, because the MPI library now runs on million-core systems, so PyTorch and TensorFlow can also be scaled to that large number of cores and GPUs. We have done that kind of tight coupling, and it helps researchers really take advantage of HPC. >>If a high school student interested in computer science is looking for a university, the Ohio State University is world renowned and widely known, but talk about what that looks like on a day-to-day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like, and is it a good pitch for people to consider the university? >>Yes. From a university perspective, by the way, the Ohio State University has one of the largest single campuses in the US, one of the top three or four, with 65,000 students. >>Wow. >>It's one of the very largest campuses, and especially within computer science, where I am located, high performance computing is a very big focus; we are again one of the top schools in the world for high performance computing, and we also have great strength in AI. So we always encourage new students who want to work on state-of-the-art solutions to get exposed to the concepts, the principles, and also the practice, and we can really give them that kind of experience. Many of my past students and staff are in top companies now and have become big managers. >>How long did you say you've been at this? >>31 years. >>31 years. So you've had people who weren't alive when you were already doing this stuff; they were born, they grew up, they went to university and graduate school, and now they're out there. >>Now they're in many top companies, national labs, and universities all over the world, so they have been trained very well. >>You've touched a lot of lives, sir. >>Yes, thank you. >>We've seen a real burgeoning of AI-specific hardware emerge over the last five years or so, and architectures going beyond just CPUs and GPUs to ASICs and FPGAs and accelerators. Does this excite you? Are there innovations you're seeing in this area that you think have great promise? >>Yeah, there is a lot of promise. In supercomputing technology, every so often a big barrier jump happens: a new, disruptive technology comes along and you move to the next level. That's what we are seeing now; a lot of these AI chips and AI systems are coming up, and they take you to the next level. But the bigger challenge is whether it is cost-effective and can be sustained for the long run, and this is where commodity technology comes in, because commodity technology takes you farther for longer. So with all these new chips coming up, Gaudi and the like, can they really bring down the cost?
If that cost can be reduced, you will see a much bigger push for AI solutions that are cost-effective. >>What about on the interconnect side of things? Your start sort of coincided with the initial standards for InfiniBand, and Intel was really big in that architecture originally. Do you see interconnects like RDMA over Converged Ethernet playing a part in that sort of democratization or commoditization? What are your thoughts there? >>Yes, this is a great thing. We saw InfiniBand coming, and of course InfiniBand is available as a commodity. But over the years people have been trying to see how those RDMA mechanisms can be used over Ethernet, and that's how RoCE was born, and RoCE is also being deployed. Besides these, you now have Slingshot, the Cray Slingshot, which is also an Ethernet-based system, and a lot of those RDMA principles are being used under the hood. So any modern network you see, whether it's an InfiniBand network, a RoCE network, a Slingshot network, you name it, they are using all the very latest principles, and of course everybody wants to make it commodity. That's what you see on the show floor: everybody trying to compete to give you the best performance at the lowest cost, and we'll see who wins over the years. >>Sort of a macroeconomic question: Japan, the US, and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>It's a big thing, significant. I think for the last five to seven years we lost that lead, but now with Frontier being number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical, not only for fundamental research but for national security, to really keep the US at the leading edge. So I hope the US will continue to lead for the next few years until another new system comes out. >>One of the gating factors is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better for data science careers? >>That is also very important. We always talk about a pipeline: at the PhD level we expect a lot, but we also want students to get exposed to many of these concepts from the high school level. And things are actually changing. These days I see a lot of high school students who know how to program in Python, how to program in C and object-oriented languages, and they're even being exposed to AI at that level, which is a very healthy sign. From the Ohio State side we are always engaged with K-12 in many different programs, gradually trying to take students to the next level, and I think we need to accelerate that in a very significant manner, because we need that kind of workforce. It is not just about building the number-one system, but how we really utilize it, how we utilize that science, how we propagate it to the community. For that we need all these trained personnel.
In fact, in my group we are also involved in a lot of cyber-training activities for HPC professionals. In fact, today there is a session, I think 12:15 to 1:15, where we'll be talking more about that. >>About education. >>Yeah, cyber training: how do we do it for professionals? We have funding together with my co-PI, Dr. Karen Tomko from the Ohio Supercomputer Center, a grant from the National Science Foundation, to really educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge; they don't get the time to learn, and the field is moving so fast. We got the initial funding, and in fact, the first time we advertised, we got 120 applications within 24 hours; we couldn't even take all of them, so we are trying to offer it in multiple phases. There is a big need for those kinds of training sessions. I also offer a lot of tutorials at different conferences: we had a high performance networking tutorial, and here we have a high performance deep learning tutorial and a high performance big data tutorial. I've been offering tutorials at this conference since 2001. >>So in the last 31 years at the Ohio State University, as my friends remind me it is properly called, you've seen the world get a lot smaller, because 31 years ago Ohio, roughly in the middle of North America, was not as connected to everywhere else in the globe. It kind of boggles the mind when you think of that progression over 31 years. And talking about the world getting smaller, we're in the thick of the celebratory season, when many groups of people exchange gifts for a variety of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver the world, what would it be? What would the first thing be? It's like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>It's very hard to answer in one way, but let me bring a little different context and I can answer this. I talked about the MVAPICH project, but just last year we were awarded an NSF AI Institute award. It's a $20 million award; I am the overall PI, but there are 14 universities involved. >>And what is that institute? >>It's ICICLE; you can just go to icicle.ai. And it aligns with exactly what you are asking: how to bring AI to the masses, democratizing AI. That's the overall goal of this institute. We have three verticals we are working on. One is digital agriculture, so that would be my first wish: how do you take HPC and AI to agriculture? The world just crossed 8 billion people, and we need continuous food and food security. How do we grow food at the lowest cost and with the highest yield? >>Water consumption. >>Water consumption: can we minimize the water consumption, or the fertilization, and not do it blindly? The technologies are out there. Say there is a wheat field, and a traditional farmer sees that there is some disease; they will just go and spray pesticides, which is not good for the environment.
Now I can fly a drone, get images of the field in real time, check them against the models, and it will tell me, okay, this part of the field has disease one, this part of the field has disease two. Then I indicate to the tractor or the sprayer: spray only pesticide one here, pesticide two there. That has a big impact. This is what we are developing in the NSF AI Institute ICICLE. We have also chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation and climate change: how do you understand how the animals move, can we learn from them, and then see how human beings need to act in the future? And the third one is food insecurity and logistics, smart food distribution. These are our three broad goals in the institute: how do we develop the cyberinfrastructure underneath, combining HPC, AI and security? We have a large team; as I said, there are 40 PIs and 60 students, a hundred-member team working together. So that would be my wish: how do we really democratize AI?

>>Fantastic. I think that's a great place to wrap the conversation here on day three of the Supercomputing 2022 conference on theCUBE. It was an honor. Dr. Panda has been working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result is improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind: it isn't just about the bits and the bobs and the speeds and the feeds, it's about serving humanity. Maybe that's a little too profound a statement; I would argue it's not even close. I'm Dave Nicholson with theCUBE, with my cohost Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from theCUBE at Supercomputing 2022, coming up shortly.

>>Thanks a lot.
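To make the drone-to-sprayer loop described above a little more concrete, here is a minimal Python sketch of the same idea. Everything in it (the zone reading, the disease classifier, the pesticide mapping) is a hypothetical stand-in for illustration, not part of any published ICICLE software.

```python
# Hypothetical sketch of the drone-to-sprayer loop: capture imagery per field
# zone, run it through a disease model, and build a targeted spray plan
# instead of blanket-spraying the whole field. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ZoneReading:
    zone_id: str
    image: bytes          # raw frame the drone captured over this zone

def classify_disease(image: bytes) -> Optional[str]:
    """Stand-in for inference against a trained crop-disease model."""
    ...                    # e.g. run a CNN and return "disease_1", "disease_2", or None
    return None

PESTICIDE_FOR = {"disease_1": "pesticide_1", "disease_2": "pesticide_2"}

def plan_spraying(readings: list[ZoneReading]) -> dict[str, str]:
    """Map each affected zone to the pesticide it needs; healthy zones are skipped."""
    plan = {}
    for reading in readings:
        disease = classify_disease(reading.image)
        if disease in PESTICIDE_FOR:
            plan[reading.zone_id] = PESTICIDE_FOR[disease]
    return plan
```

The value is in the shape of the loop: inference narrows treatment down to the zones that need it, and the resulting plan is what would be handed to the tractor or smart sprayer.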

Published Date : Nov 17 2022

SUMMARY :

Dave Nicholson and Paul Gillin close out theCUBE's day-three coverage of Supercomputing 2022 in Dallas with Dr. Dhabaleswar "DK" Panda of the Ohio State University. The conversation spans his group's long-running MPI and high-performance networking work, the spread of RDMA beyond InfiniBand to Ethernet fabrics such as RoCE and Slingshot, the US regaining the supercomputing lead with Frontier, NSF-funded cyber training for HPC professionals, growing the data science pipeline from high school through PhD, and the 20 million dollar NSF AI Institute ICICLE (icicle.ai), which aims to democratize AI across digital agriculture, animal ecology, and smart food distribution.


Day 1 Wrap | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: theCUBE presents KubeCon and Cloud NativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners.

>> Welcome to Valencia, Spain, and coverage of KubeCon and Cloud NativeCon Europe 2022. I'm Keith Townsend, your host of theCUBE, along with Paul Gillum, Senior Editor, Enterprise Architecture for Silicon Angle, and Enrico Signoretti, Senior IT Analyst for GigaOm. This has been a full day: 7,500 attendees. I think I saw them run out of food; this was just unexpected. From what I understand, registration escalated from a cap of 4,000, to 5,000, and finally landed at 7,500 people. Today's been a great day of coverage, and I'm super excited for tomorrow's coverage from theCUBE. But first, we'll let the new person on stage take the first question of the day-one wrap-up. Enrico, what's different about this year versus other KubeCons or Cloud Native conversations?

>> I think in general, it's the maturity. We talk a lot about day-two operations, observability, monitoring, going deeper and deeper into the security aspects of the application. That means that for many enterprises Kubernetes is becoming really critical, and they want to get more control of it. And of course you have the discussion around FinOps, around cost control, because we are deploying Kubernetes everywhere. If you don't have everything optimized, controlled and monitored, costs go through the roof. Think about deploying in the public cloud: if your application is not optimized, you're paying more. And on-premises, if you are not optimized, you don't have a clear idea of what is going to happen, so capacity planning becomes the nightmare we know from the past. There is a lot going on around these topics, and it's really exciting, actually: less infrastructure, more application. That is what Kubernetes is about here.

>> Paul, help me separate some of the signal from the noise. There is a lot going on and a lot of overlap. What are some of the big themes or takeaways from day one that Enterprise Architects and executives need to take home and really chew on?

>> Well, Kubernetes was a turning point. Docker was introduced nine years ago, and for the first three or four years it was an interesting technology that was not very widely adopted. Kubernetes came along and gave developers a reason to use containers. What strikes me about this conference is that this is a developer event. Ordinarily you go to conferences and they're geared toward IT managers, toward CIOs; this one is very much geared toward developers. When you have the hearts and minds of developers, the rest of the industry is sort of pulled along with it. So this is ground zero for the hottest area of the entire computing industry right now: building distributed, microservices-based, cloud native applications. And it's the developers who are leading the way. I think that's a significant shift. I don't see the managers here, the CIOs; these are the people who are pulling this industry into the next generation.

>> One of the interesting things I've seen: we've always said Kubernetes is for the developers, but we talked with an end user from MoneyGram, an enterprise architect, who brought Kubernetes to his front-end developers, and they rejected it. They said, what is this? I just want to develop code.
So when we say Kubernetes is for developers, or the developers are here, how do we reconcile that mismatch of experience? We have Enterprise Architects here, and I hear constantly that Kubernetes is for developers, but is it a certain kind of developer that Kubernetes is for?

>> Well, yes and no. The paradigm is changing. Maybe a few years back it was tough to understand how to make your application different; microservices, everything, was new for everybody. But everything has changed to a point, and now the developer understands: it's natural now. You are going through the application, APIs, automation, because the complexity of these applications is huge and you have 24/7 deployment, so you have to stay always on, et cetera, et cetera. And to the point about developers: they are bringing in this new generation of decision makers. They are adopting the technology, maybe as a sort of shadow IT at the very beginning. They're adopting it, they're using it, and they're starting to use a lot of open source stuff. Then somebody higher up the stack, the executive, discovers that the technology already in place is a critical component, and it's transformed into something enterprise, meaning paying for enterprise services on top of it to be sure of support contracts and so on. So it's a real journey, and these guys are the real decision makers, or at least they are at the base of the decision-making process.

>> Cloud Native is something we're going to learn to take for granted. Remember the Fail Whale in the early days of Twitter, when periodically the service would just crash from traffic? Amazon went through the same thing; Facebook went through the same thing. We don't see that anymore, because we are now learning to take Cloud Native for granted. We assume applications are going to be available and performant, that they'll scale and handle anything we throw at them. That is Cloud Native at work. And I think we forget sometimes how refreshing it is to have an internet that really works for you.

>> Yeah, though I think we're much earlier in the journey. We had Microsoft on; the Xbox team talked about 22,000 pods running Linkerd and some of the initial problems and pain points around those challenges. Much of my hallway track conversation has centered on the decision makers, the platform teams, and that's what I'm getting excited to talk about in tomorrow's coverage. Who's on the ground doing this stuff? Is it developers, as we're so often told? Or is it what we're seeing from the Microsoft example and the MoneyGram example, where central IT is getting it, and not only are they getting it, they're enabling developers to simply write code and build it, and Kubernetes is invisible. It seems like that's become the Holy Grail: make Kubernetes invisible, make Cloud Native invisible, and the experience is much closer to cloud.

>> I think that's interesting. What I've taken from a lot of conversations over the past year is that it's not that traditional IT operations are disappearing; it's that traditional IT operations are giving resources to these new developers. It's a sort of walled garden: you don't see the wall, but it's a walled garden. They are giving you resources, and you use these resources like an internal cloud. A few years back we were talking about private cloud, and a private cloud with, let's say, the same identical paradigm as the public cloud is not possible, because there are no infinite resources, or whatever we think of as infinite resources. So what you're doing today is giving these developers enough resources to think that they are unlimited, so they can do automatic provisioning and all these kinds of things. They don't think about infrastructure at all, but actually it's there. IT operations are still there, providing resources to let developers be more free and agile and everything. So we are still in an interesting time for all of it.

>> Kubernetes and Cloud Native in general, I think, are blurring the lines. Development and operations were always separate entities; obviously with DevOps those two are merging. But now, when you add in shift-left testing, shift-right testing, DevSecOps, you see the developers become much more involved in the infrastructure, and they want to be involved in the infrastructure because that's what makes their applications perform. So this is going to cause IT organizations to do some rethinking about what those traditional lines are, maybe break down those walls and have these teams work much closer together. And that should be a good thing, because the people who are developing applications should also have intimate knowledge of the infrastructure they're going to run on.

>> So Paul, another recurring theme we've heard here is the impact of funding on resources. What have your discussions been with founders and creators about sourcing talent and the impact of the markets on their day to day?

>> Well, sourcing talent has been a huge issue for the last year, really ever since the pandemic started. Interestingly, one of our guests earlier today said that with the meltdown in the tech stock market, talent has actually become more available, because people who were tied to their companies by their stock options are now seeing those options underwater, and suddenly they're not as loyal to the companies they joined. So the startups, and there are many small startups here, are seeing a bit of a windfall from the tech stock bust. Nevertheless, skills are a long-term problem. The US educational system is turning out about 10% of the skilled people that the industry needs every year, and no one I know sees an end to that issue anytime soon.

>> So Enrico, last question to you. Let's talk about what that means to the practitioner. There's a lot of opportunity out there: 200-plus sponsors I hear, and I think the project count is 200-plus as well. Where are the big opportunities for a practitioner thinking about the next thing to learn to survive the next 10 or 15 years of a career? Where do you think the focus should be? Should it be that low-level cloud builder, or should it be at those levels of abstraction that we're seeing and reading about?

>> I think that's a good question, and the answer is not that easy. Being a developer today, for sure, grants you a salary at the end of the month; there is high demand. But there are a lot of other technical figures in the data center and in the cloud that could really find a job easily today. Developers are the first in my mind also because they can serve multiple roles. You can be a developer, but with the new roles that we have, especially now with DevOps, you can also be somebody who supports operations, because you know automation and a few other things. So you can be a sysadmin of the next generation even if you start as a developer.

>> KubeCon 2022 is exciting. I don't care if you're a developer, practitioner, investor, IT decision maker, CIO or CXO, there's so much to learn and absorb here, and we're going to be covering it for the next two days. Paul and I will be shoulder to shoulder; I'm not going to say you're going to get sick of this, because it's all great information, and we'll help sort all of it. From Valencia, Spain, I'm Keith Townsend, along with my hosts Enrico Signoretti and Paul Gillum, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)
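One concrete way platform teams build the walled garden Enrico describes is with namespace quotas: developers get what feels like an open cloud, while operations caps what the namespace can consume. Below is a minimal sketch using the official Kubernetes Python client; it assumes a reachable cluster and a local kubeconfig, and the team name and limits are illustrative assumptions, not anything discussed on the panel.

```python
# Minimal sketch: carve out a namespace for a team and cap what it can consume.
# Assumes a reachable cluster and local kubeconfig; names and limits are illustrative.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

team_ns = "team-a"
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=team_ns))
)

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name=f"{team_ns}-quota", namespace=team_ns),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "20",        # total CPU the namespace may request
            "requests.memory": "64Gi",   # total memory it may request
            "pods": "100",               # hard cap on pod count
        }
    ),
)
core.create_namespaced_resource_quota(namespace=team_ns, body=quota)
```

Inside the namespace, developers deploy as if capacity were unlimited; the quota is where the platform team's capacity planning actually lives.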

Published Date : May 19 2022

SUMMARY :

Keith Townsend, Paul Gillum and Enrico Signoretti wrap up day one of theCUBE's coverage of KubeCon and Cloud NativeCon Europe 2022 in Valencia, Spain. They discuss the maturity of the Kubernetes ecosystem, day-two operations, FinOps and cost control, how the show has become a developer-led event, platform teams giving developers a walled garden of resources that feels like an unlimited cloud, the blurring of lines between development and operations, the talent shortage and the effect of the tech stock downturn on hiring, and where practitioners should focus their skills next.

