Dell EMC AI Lab Tour | Dell EMC: Get Ready For AI


 

(upbeat music) >> Thank you for coming to the HPC and AI Innovation Lab. So, I'm sure that you've heard a lot of excitement in the industry about what we can do with AI and machine learning and deep learning, and our team in our lab has been building solutions for this space. It's very similar to what we do with our other solutions, including high performance computing, where we take servers, storage, networking, software, put it all together to design targeted solutions for a particular use case, and then bring in services and support along with that, so that we have a complete product. That's what we're doing for the AI space as well. So whether you're doing machine learning algorithms on your data, say for example in Hadoop, or whether you're doing deep learning, convolutional neural networks, RNNs, it doesn't matter what technology you're using, right? You have different choices for compute, and those compute choices can be CPUs, GPUs, FPGAs, custom ASICs. There are all sorts of different choices for compute. Similarly, you have a lot of different choices for networking, for storage, and for your actual use case. Right, are you doing image recognition, fraud detection, what are you trying to do? So our goal has multiple parts. First, we want to bring in all these new technologies, all these different technologies, and see how they work well together. Specifically in the AI space, we want to make sure that we have the right software frameworks, because a big piece of putting these solutions together is making sure that your MXNet, and Caffe, and TensorFlow, and all these frameworks are working well together, along with all these different neural network models. So we put all these things together and make sure that we can run standard benchmark datasets so we can do comparisons across configurations, and then, as a result of all that work, share best practices and tuning, including the storage piece as well. Our Top500 cluster is over here, so multiple racks; this is a cluster that has more than 500 servers today, around 560 servers, and it's on the latest Top500 list, which is a list, published twice a year, of the 500 fastest supercomputers in the world. We started with a smaller number of CPUs, we had 128 servers, and then we added more servers, we swapped over to the next generation of CPUs, then we added even more servers, and now we have the latest generation Intel CPUs in this cluster. One of the questions we've been getting more and more is, what do you see with liquid cooling? So, Dell has had the capability to do liquid cooled systems for a while now, but we recently added this capability in the factory as well, so you can order systems that are direct contact liquid cooled directly from the factory. Let's compare the two, right? Right over here, you have an air cooled rack. Here we have the exact same configuration, the same compute infrastructure, but liquid cooled. The CPU has a cold plate on it, and that's cooled with facilities water. So these pipes actually have water flowing through them, and each sled has two pipes coming out of it for the water loop, and these pipes from each server, each sled, go into these rack manifolds, and at the bottom of the rack over there is where we have our heat exchanger.
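As a rough illustration of the framework-level benchmarking described above: the idea is to run the same small training job on different hardware configurations and compare throughput. The model, dataset, and batch size below are illustrative placeholders, not the lab's actual benchmark suite; this is a minimal sketch assuming TensorFlow/Keras is installed.

```python
# Minimal sketch: time a small CNN training run so the same script can be
# re-run on different hardware configurations and the images/sec compared.
# Model, dataset, and batch size are illustrative placeholders.
import time
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

batch_size, steps = 256, 200
start = time.time()
model.fit(x_train, y_train, batch_size=batch_size,
          steps_per_epoch=steps, epochs=1, verbose=0)
elapsed = time.time() - start
print(f"throughput: {batch_size * steps / elapsed:.0f} images/sec")
```

The same script run on an air-cooled and a liquid-cooled rack, or on different CPU/GPU configurations, gives a directly comparable images-per-second number.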
In our early studies, we have seen that your efficiency, in terms of how much performance you get out of the server, should not depend on whether you're air cooled or liquid cooled, as long as your air cooling solution can provide enough cooling for your components. So what that means is, if you have a well air cooled solution, it's not going to perform any worse than a liquid cooled solution. What liquid cooling allows you to do is, in the same rack space, put in a higher level configuration, higher TDP processors, more disks, a configuration that you simply cannot adequately air cool; that configuration, in the same space in your data center with the same air flow, you will be able to liquid cool. The biggest advantage of liquid cooling today has to do with PUE ratios, so how much of your facility power you are using for compute and your IT infrastructure versus for cooling and power delivery. This is production, this is part of the cluster. What we are doing right now is running rack level studies, right? So we've done single chassis studies in our thermal lab, along with our thermal engineers, on the advantages of liquid cooling, what we can do, and how it works for particular workloads. But now we have a rack level solution, and so we are running different types of workloads, manufacturing workloads, weather simulation, some AI workloads, standard High Performance Linpack benchmarks, on an entire rack of liquid cooled servers and an entire rack of air cooled servers. All these racks have metered PDUs where we can measure power, so we're going to measure power consumption as well, and then we have sensors which allow us to measure temperature, and then we can tell you the whole story. And of course, we have a really, you know, phenomenal group of people in our thermal team, our architects, and we also have the ability to come in and evaluate a data center to see, does liquid cooling make sense for you today? It's not one size fits all; it's not that liquid cooling is what everybody must do and you must do it today, no. And that's the value of this lab, right? Actual quantitative results, for liquid cooling, for all our technologies, for all our solutions, so that we can give you the right configuration, the right optimizations, with the data backing it up, for the right decision for you, instead of forcing you into the one solution that we do have. So now we're actually standing right in the middle of our Zenith supercomputer, so all the racks around you are Zenith. You can hear that the noise level is higher; that's because this is one cluster and it's running workloads right now, both from our team and our engineers, as well as from customers who can get access to the lab and run their workloads. So that noise level you hear is an actual supercomputer. We have C6420 servers in here today, with the Intel Xeon Scalable family processors, and that's what you see in these racks behind you and in front of you. And this cluster is interconnected using the Omni-Path interconnect. There are thousands and thousands of applications in the HPC space, and over the years we've added more and more capability. So today in the lab we do a lot of work with manufacturing applications, that's computational fluid dynamics, CFD, CAE, structural mechanics, you know, things like that. We do a lot of work with life sciences, that's next generation sequencing applications, molecular dynamics, cryogenic electron microscopy; we do weather simulation applications, and a whole bunch more.
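A quick worked example of the two metrics in play above: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so a value closer to 1.0 means less overhead going to cooling and power delivery, and the metered PDUs make a performance-per-watt comparison possible between the air-cooled and liquid-cooled racks. The numbers below are made up for illustration.

```python
# PUE = total facility power / IT equipment power (closer to 1.0 is better).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Performance per watt from a metered run, e.g. a Linpack result plus PDU data.
def gflops_per_watt(gflops: float, measured_kw: float) -> float:
    return gflops / (measured_kw * 1000.0)

# Hypothetical readings for the same rack-level workload:
print(pue(total_facility_kw=180.0, it_equipment_kw=120.0))  # 1.5, heavy cooling overhead
print(pue(total_facility_kw=135.0, it_equipment_kw=120.0))  # ~1.13, less overhead
print(gflops_per_watt(gflops=250_000.0, measured_kw=25.0))  # 10.0 GFLOPS per watt
```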
Quantum chromodynamics; we do a whole bunch of benchmarking of subsystems, so tests for compute, for network, for memory, for storage; we do a lot of parallel file system and I/O tests. And when I talk about application benchmarking, we're doing that across different compute, network, and storage to see what the full picture looks like. The list that I've given you is not a complete list. This switch is a Dell Networking H-Series switch, which supports the Omni-Path fabric, the Omni-Path interconnect, which today runs at a hundred gigabits per second. All the clusters, all the Zenith servers in the lab, are connected to this switch. Because we started with a small number of servers and then scaled, we knew we were going to grow, so we chose to start with a director class switch, which allowed us to add leaf modules as we grew. The servers, the racks, that are closest to the switch have copper cables; the ones that are coming from across the lab have fiber cables. So, you know, this switch is what allows us to call this an HPC cluster, where we have a high-speed interconnect for our parallel and distributed computations, and a lot of our current deep learning work is being done on this cluster as well, on the Intel Xeon side. (upbeat music)
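One example of the network subsystem tests mentioned in the tour is a simple point-to-point bandwidth check across the interconnect. The sketch below uses mpi4py as a stand-in; it is not the lab's benchmark code, just a minimal two-rank ping-pong under the assumption that an MPI stack is available.

```python
# Two-rank ping-pong bandwidth sketch. Run with: mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 64 * 1024 * 1024                 # 64 MB message
buf = np.zeros(nbytes, dtype=np.uint8)
reps = 20

comm.Barrier()
start = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
elapsed = MPI.Wtime() - start

if rank == 0:
    moved_gb = 2.0 * reps * nbytes / 1e9  # each rep moves the buffer both ways
    print(f"~{moved_gb / elapsed:.1f} GB/s effective point-to-point bandwidth")
```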

Published Date : Aug 7 2018

ENTITIES

Entity | Category | Confidence
thousands | QUANTITY | 0.99+
two pipes | QUANTITY | 0.99+
128 servers | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
each sled | QUANTITY | 0.99+
First | QUANTITY | 0.99+
One | QUANTITY | 0.99+
each server | QUANTITY | 0.98+
HBCN AI Innovation Lab | ORGANIZATION | 0.98+
one solution | QUANTITY | 0.98+
both | QUANTITY | 0.98+
Xeon | COMMERCIAL_ITEM | 0.97+
one cluster | QUANTITY | 0.97+
today | DATE | 0.96+
twice a year | QUANTITY | 0.96+
around 560 servers | QUANTITY | 0.96+
C6420 | COMMERCIAL_ITEM | 0.95+
Network H-Series | COMMERCIAL_ITEM | 0.95+
500 servers | QUANTITY | 0.95+
Intel | ORGANIZATION | 0.94+
500 fastest supercomputers | QUANTITY | 0.93+
Dell EMC | ORGANIZATION | 0.92+
single chassis | QUANTITY | 0.9+
Hadoop | TITLE | 0.9+
Omnipath | COMMERCIAL_ITEM | 0.81+
a hundred gigabits per second | QUANTITY | 0.79+
applications | QUANTITY | 0.76+
Tensorflow | TITLE | 0.71+
AI Lab Tour | EVENT | 0.67+
CAP | TITLE | 0.64+
500 | QUANTITY | 0.6+
one | QUANTITY | 0.56+
Zenith | ORGANIZATION | 0.55+
top 500 | QUANTITY | 0.54+
MXNet | TITLE | 0.5+
Zenith | COMMERCIAL_ITEM | 0.46+
Omnipath | ORGANIZATION | 0.36+

The Impact of Exascale on Business | Exascale Day


 

>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. Welcome, everyone, to theCUBE's celebration of Exascale Day. Shaheen Khan is here. He's a founding partner and analyst at OrionX and, among other things, he is the co-host of Radio Free HPC. Shaheen, welcome. Thanks for coming on. >> Thanks for being here, Dave. Great to be here. How are you doing? >> Doing well, thanks. It's crazy doing these things, COVID remote interviews. I wish we were face to face at a supercomputer show, but, hey, this thing is working, and we can still have great conversations. And I love talking to analysts like you, because you bring an independent perspective and a very wide observation space. So let me ask: like many analysts, you probably have sort of a mental model or a market model that you look at. So maybe talk about your work, how you look at the market, and we can get into some of the megatrends that you see. >> Very well, very well. Let me just quickly set the scene. We fundamentally track the megatrends of the information age, and of course, because we're in the information age, digital transformation falls out of that. And the megatrends that drive that, in our mind, are IoT, because that's the fountain of data; 5G, because that's how it's going to get communicated; AI and HPC, because that's how we're going to make sense of it; blockchain and cryptocurrencies, because that's how it's going to get transacted, that's how value is going to get transferred from place to place; and then finally, quantum computing, because that exemplifies how things are going to get accelerated.
And there was just, you know, tons of money flowing in, and then things kind of consolidated a little bit and got very, very specialized. And then with the big data craze, you know, we've seen HPC really at the heart of all that. So what's your take on the ebb and flow of the HPC business and how it's evolved? >> Well, HPC was always trying to make sense of the world, trying to make sense of nature. And of course, as much as we do know about nature, there's a lot we don't know about nature, and problems in nature can be classified into basically linear and nonlinear problems. The linear ones are easy; they've already been solved. Of the nonlinear ones, some are easy and many are hard; the nonlinear, hard, chaotic problems are the ones that you really need to solve, the closer you can get. So HPC was basically marching along trying to solve these things. It had a whole process, you know, with the scientific method going way back to Galileo; experimentation was part of it, and between theory and experiment you look at the data, you theorize things, and then you experiment to prove the theories. Then simulation, using computers to validate things, eventually became a third pillar of science, so you had theory, experiment, and simulation. So all of that was going on until the rest of the world, thanks to digitization, started needing some of those same techniques. Why? Because you've got too much data. Simply, there's too much data to ship to the cloud, and there's too much data to make sense of without math and science. So now enterprise computing problems are starting to look like scientific problems, enterprise data centers are starting to look like national lab data centers, and there is that sort of convergence that has been taking place gradually over the past three or four decades. And it's starting to look really real now. >> Interesting. I want to ask you about something I like to talk to analysts about: competition, the competitive landscape. Is the competition in HPC between vendors or countries? >> Well, this is a very interesting thing you're asking, because our other thesis is that we are moving a little bit beyond geopolitics to techno-politics. There are now imperatives at the political level that are driving some of these decisions. Obviously 5G is very visible as a piece of technology that is now in the middle of political discussions. COVID-19, as you mentioned, is itself a global challenge that needs to be solved at that level. AI: who has access to how much data and what sort of algorithms? And it turns out, as we all know, that for AI you need a lot more data than you thought you did, so suddenly data superiority is more important, perhaps, than ever; it can lead to information superiority. So, yeah, that's really all happening. But the actors, of course, continue to be the vendors that are the embodiment of the algorithms and the data and the systems and infrastructure that feed the applications, so to say.
But But there were at least two things that happened. You had all this data on then the cost of computing. You know, declines came down so so rapidly over the years. So now a eyes back, we're seeing all kinds of applications getting infused into virtually every part of our lives. People trying to advertise to us, etcetera. Eso So talk about the intersection of AI and HPC. What are you seeing there? >>Yeah, definitely. Like you said, I has a long history. I mean, you know, it came out of MIT Media Lab and the AI Lab that they had back then and it was really, as you mentioned, all focused on expert systems. It was about logical processing. It was a lot of if then else. And then it morphed into search. How do I search for the right answer, you know, needle in the haystack. But then, at some point, it became computational. Neural nets are not a new idea. I remember you know, we had we had a We had a researcher in our lab who was doing neural networks, you know, years ago. And he was just saying how he was running out of computational power and we couldn't. We were wondering, you know what? What's taking all this difficult, You know, time. And it turns out that it is computational. So when deep neural nets showed up about a decade ago, arm or it finally started working and it was a confluence of a few things. Thalib rhythms were there, the data sets were there, and the technology was there in the form of GPS and accelerators that finally made distractible. So you really could say, as in I do say that a I was kind of languishing for decades before HPC Technologies reignited it. And when you look at deep learning, which is really the only part of a I that has been prominent and has made all this stuff work, it's all HPC. It's all matrix algebra. It's all signal processing algorithms. are computational. The infrastructure is similar to H B. C. The skill set that you need is the skill set of HPC. I see a lot of interest in HBC talent right now in part motivated by a I >>mhm awesome. Thank you on. Then I wanna talk about Blockchain and I can't talk about Blockchain without talking about crypto you've written. You've written about that? I think, you know, obviously supercomputers play a role. I think you had written that 50 of the top crypto supercomputers actually reside in in China A lot of times the vendor community doesn't like to talk about crypto because you know that you know the fraud and everything else. But it's one of the more interesting use cases is actually the primary use case for Blockchain even though Blockchain has so much other potential. But what do you see in Blockchain? The potential of that technology And maybe we can work in a little crypto talk as well. >>Yeah, I think 11 simple way to think of Blockchain is in terms off so called permission and permission less the permission block chains or when everybody kind of knows everybody and you don't really get to participate without people knowing who you are and as a result, have some basis to trust your behavior and your transactions. So things are a lot calmer. It's a lot easier. You don't really need all the supercomputing activity. Whereas for AI the assertion was that intelligence is computer herbal. And with some of these exa scale technologies, we're trying to, you know, we're getting to that point for permission. Less Blockchain. The assertion is that trust is computer ble and, it turns out for trust to be computer ble. 
It's really computational intensive because you want to provide an incentive based such that good actors are rewarded and back actors. Bad actors are punished, and it is worth their while to actually put all their effort towards good behavior. And that's really what you see, embodied in like a Bitcoin system where the chain has been safe over the many years. It's been no attacks, no breeches. Now people have lost money because they forgot the password or some other. You know, custody of the accounts have not been trustable, but the chain itself has managed to produce that, So that's an example of computational intensity yielding trust. So that suddenly becomes really interesting intelligence trust. What else is computer ble that we could do if we if we had enough power? >>Well, that's really interesting the way you described it, essentially the the confluence of crypto graphics software engineering and, uh, game theory, Really? Where the bad actors air Incentive Thio mined Bitcoin versus rip people off because it's because because there are lives better eso eso so that so So Okay, so make it make the connection. I mean, you sort of did. But But I want to better understand the connection between, you know, supercomputing and HPC and Blockchain. We know we get a crypto for sure, like in mind a Bitcoin which gets harder and harder and harder. Um and you mentioned there's other things that we can potentially compute on trust. Like what? What else? What do you thinking there? >>Well, I think that, you know, the next big thing that we are really seeing is in communication. And it turns out, as I was saying earlier, that these highly computational intensive algorithms and models show up in all sorts of places like, you know, in five g communication, there's something called the memo multi and multi out and to optimally manage that traffic such that you know exactly what beam it's going to and worth Antenna is coming from that turns out to be a non trivial, you know, partial differential equation. So next thing you know, you've got HPC in there as and he didn't expect it because there's so much data to be sent, you really have to do some data reduction and data processing almost at the point of inception, if not at the point of aggregation. So that has led to edge computing and edge data centers. And that, too, is now. People want some level of computational capability at that place like you're building a microcontroller, which traditionally would just be a, you know, small, low power, low cost thing. And people want victor instructions. There. People want matrix algebra there because it makes sense to process the data before you have to ship it. So HPCs cropping up really everywhere. And then finally, when you're trying to accelerate things that obviously GP use have been a great example of that mixed signal technologies air coming to do analog and digital at the same time, quantum technologies coming so you could do the you know, the usual analysts to buy to where you have analog, digital, classical quantum and then see which, you know, with what lies where all of that is coming. And all of that is essentially resting on HBC. >>That's interesting. I didn't realize that HBC had that position in five G with multi and multi out. That's great example and then I o t. I want to ask you about that because there's a lot of discussion about real time influencing AI influencing at the edge on you're seeing sort of new computing architectures, potentially emerging, uh, video. 
The acquisition of arm Perhaps, you know, amore efficient way, maybe a lower cost way of doing specialized computing at the edge it, But it sounds like you're envisioning, actually, supercomputing at the edge. Of course, we've talked to Dr Mark Fernandez about space born computers. That's like the ultimate edge you got. You have supercomputers hanging on the ceiling of the International space station, but But how far away are we from this sort of edge? Maybe not. Space is an extreme example, but you think factories and windmills and all kinds of edge examples where supercomputing is is playing a local role. >>Well, I think initially you're going to see it on base stations, Antenna towers, where you're aggregating data from a large number of endpoints and sensors that are gathering the data, maybe do some level of local processing and then ship it to the local antenna because it's no more than 100 m away sort of a thing. But there is enough there that that thing can now do the processing and do some level of learning and decide what data to ship back to the cloud and what data to get rid of and what data to just hold. Or now those edge data centers sitting on top of an antenna. They could have a half a dozen GPS in them. They're pretty powerful things. They could have, you know, one they could have to, but but it could be depending on what you do. A good a good case study. There is like surveillance cameras. You don't really need to ship every image back to the cloud. And if you ever need it, the guy who needs it is gonna be on the scene, not back at the cloud. So there is really no sense in sending it, Not certainly not every frame. So maybe you can do some processing and send an image every five seconds or every 10 seconds, and that way you can have a record of it. But you've reduced your bandwidth by orders of magnitude. So things like that are happening. And toe make sense of all of that is to recognize when things changed. Did somebody come into the scene or is it just you know that you know, they became night, So that's sort of a decision. Cannot be automated and fundamentally what is making it happen? It may not be supercomputing exa scale class, but it's definitely HPCs, definitely numerically oriented technologies. >>Shane, what do you see happening in chip architectures? Because, you see, you know the classical intel they're trying to put as much function on the real estate as possible. We've seen the emergence of alternative processors, particularly, uh, GP use. But even if f b g A s, I mentioned the arm acquisition, so you're seeing these alternative processors really gain momentum and you're seeing data processing units emerge and kind of interesting trends going on there. What do you see? And what's the relationship to HPC? >>Well, I think a few things are going on there. Of course, one is, uh, essentially the end of Moore's law, where you cannot make the cycle time be any faster, so you have to do architectural adjustments. And then if you have a killer app that lends itself to large volume, you can build silicon. That is especially good for that now. Graphics and gaming was an example of that, and people said, Oh my God, I've got all these cores in there. Why can't I use it for computation? So everybody got busy making it 64 bit capable and some grass capability, And then people say, Oh, I know I can use that for a I And you know, now you move it to a I say, Well, I don't really need 64 but maybe I can do it in 32 or 16. 
So now you do it for that, and then tens, of course, come about. And so there's that sort of a progression of architecture, er trumping, basically cycle time. That's one thing. The second thing is scale out and decentralization and distributed computing. And that means that the inter communication and intra communication among all these notes now becomes an issue big enough issue that maybe it makes sense to go to a DPU. Maybe it makes sense to go do some level of, you know, edge data centers like we were talking about on then. The third thing, really is that in many of these cases you have data streaming. What is really coming from I o t, especially an edge, is that data is streaming and when data streaming suddenly new architectures like F B G. A s become really interesting and and and hold promise. So I do see, I do see FPG's becoming more prominent just for that reason, but then finally got a program all of these things on. That's really a difficulty, because what happens now is that you need to get three different ecosystems together mobile programming, embedded programming and cloud programming. And those are really three different developer types. You can't hire somebody who's good at all three. I mean, maybe you can, but not many. So all of that is challenges that are driving this this this this industry, >>you kind of referred to this distributed network and a lot of people you know, they refer to this. The next generation cloud is this hyper distributed system. When you include the edge and multiple clouds that etcetera space, maybe that's too extreme. But to your point, at least I inferred there's a There's an issue of Leighton. See, there's the speed of light s So what? What? What is the implication then for HBC? Does that mean I have tow Have all the data in one place? Can I move the compute to the data architecturally, What are you seeing there? >>Well, you fundamentally want to optimize when to move data and when to move, Compute. Right. So is it better to move data to compute? Or is it better to bring compute to data and under what conditions? And the dancer is gonna be different for different use cases. It's like, really, is it worth my while to make the trip, get my processing done and then come back? Or should I just developed processing capability right here? Moving data is really expensive and relatively speaking. It has become even more expensive, while the price of everything has dropped down its price has dropped less than than than like processing. So it is now starting to make sense to do a lot of local processing because processing is cheap and moving data is expensive Deep Use an example of that, Uh, you know, we call this in C two processing like, you know, let's not move data. If you don't have to accept that we live in the age of big data, so data is huge and wants to be moved. And that optimization, I think, is part of what you're what you're referring to. >>Yeah, So a couple examples might be autonomous vehicles. You gotta have to make decisions in real time. You can't send data back to the cloud flip side of that is we talk about space borne computers. You're collecting all this data You can at some point. You know, maybe it's a year or two after the lived out its purpose. You ship that data back and a bunch of disk drives or flash drives, and then load it up into some kind of HPC system and then have at it and then you doom or modeling and learn from that data corpus, right? I mean those air, >>right? Exactly. Exactly. Yeah. 
I mean, you know, driverless vehicles is a great example, because it is obviously coming fast and furious, no pun intended. And also, it dovetails nicely with the smart city, which dovetails nicely with I o. T. Because it is in an urban area. Mostly, you can afford to have a lot of antenna, so you can give it the five g density that you want. And it requires the Layton sees. There's a notion of how about if my fleet could communicate with each other. What if the car in front of me could let me know what it sees, That sort of a thing. So, you know, vehicle fleets is going to be in a non opportunity. All of that can bring all of what we talked about. 21 place. >>Well, that's interesting. Okay, so yeah, the fleets talking to each other. So kind of a Byzantine fault. Tolerance. That problem that you talk about that z kind of cool. I wanna I wanna sort of clothes on quantum. It's hard to get your head around. Sometimes You see the demonstrations of quantum. It's not a one or zero. It could be both. And you go, What? How did come that being so? And And of course, there it's not stable. Uh, looks like it's quite a ways off, but the potential is enormous. It's of course, it's scary because we think all of our, you know, passwords are already, you know, not secure. And every password we know it's gonna get broken. But give us the give us the quantum 101 And let's talk about what the implications. >>All right, very well. So first off, we don't need to worry about our passwords quite yet. That that that's that's still ways off. It is true that analgesic DM came up that showed how quantum computers can fact arise numbers relatively fast and prime factory ization is at the core of a lot of cryptology algorithms. So if you can fact arise, you know, if you get you know, number 21 you say, Well, that's three times seven, and those three, you know, three and seven or prime numbers. Uh, that's an example of a problem that has been solved with quantum computing, but if you have an actual number, would like, you know, 2000 digits in it. That's really harder to do. It's impossible to do for existing computers and even for quantum computers. Ways off, however. So as you mentioned, cubits can be somewhere between zero and one, and you're trying to create cubits Now there are many different ways of building cubits. You can do trapped ions, trapped ion trapped atoms, photons, uh, sometimes with super cool, sometimes not super cool. But fundamentally, you're trying to get these quantum level elements or particles into a superimposed entanglement state. And there are different ways of doing that, which is why quantum computers out there are pursuing a lot of different ways. The whole somebody said it's really nice that quantum computing is simultaneously overhyped and underestimated on. And that is that is true because there's a lot of effort that is like ways off. On the other hand, it is so exciting that you don't want to miss out if it's going to get somewhere. So it is rapidly progressing, and it has now morphed into three different segments. Quantum computing, quantum communication and quantum sensing. Quantum sensing is when you can measure really precise my new things because when you perturb them the quantum effects can allow you to measure them. Quantum communication is working its way, especially in financial services, initially with quantum key distribution, where the key to your cryptography is sent in a quantum way. 
And the data sent a traditional way that our efforts to do quantum Internet, where you actually have a quantum photon going down the fiber optic lines and Brookhaven National Labs just now demonstrated a couple of weeks ago going pretty much across the, you know, Long Island and, like 87 miles or something. So it's really coming, and and fundamentally, it's going to be brand new algorithms. >>So these examples that you're giving these air all in the lab right there lab projects are actually >>some of them are in the lab projects. Some of them are out there. Of course, even traditional WiFi has benefited from quantum computing or quantum analysis and, you know, algorithms. But some of them are really like quantum key distribution. If you're a bank in New York City, you very well could go to a company and by quantum key distribution services and ship it across the you know, the waters to New Jersey on that is happening right now. Some researchers in China and Austria showed a quantum connection from, like somewhere in China, to Vienna, even as far away as that. When you then put the satellite and the nano satellites and you know, the bent pipe networks that are being talked about out there, that brings another flavor to it. So, yes, some of it is like real. Some of it is still kind of in the last. >>How about I said I would end the quantum? I just e wanna ask you mentioned earlier that sort of the geopolitical battles that are going on, who's who are the ones to watch in the Who? The horses on the track, obviously United States, China, Japan. Still pretty prominent. How is that shaping up in your >>view? Well, without a doubt, it's the US is to lose because it's got the density and the breadth and depth of all the technologies across the board. On the other hand, information age is a new eyes. Their revolution information revolution is is not trivial. And when revolutions happen, unpredictable things happen, so you gotta get it right and and one of the things that these technologies enforce one of these. These revolutions enforce is not just kind of technological and social and governance, but also culture, right? The example I give is that if you're a farmer, it takes you maybe a couple of seasons before you realize that you better get up at the crack of dawn and you better do it in this particular season. You're gonna starve six months later. So you do that to three years in a row. A culture has now been enforced on you because that's how it needs. And then when you go to industrialization, you realize that Gosh, I need these factories. And then, you know I need workers. And then next thing you know, you got 9 to 5 jobs and you didn't have that before. You don't have a command and control system. You had it in military, but not in business. And and some of those cultural shifts take place on and change. So I think the winner is going to be whoever shows the most agility in terms off cultural norms and governance and and and pursuit of actual knowledge and not being distracted by what you think. But what actually happens and Gosh, I think these exa scale technologies can make the difference. >>Shaheen Khan. Great cast. Thank you so much for joining us to celebrate the extra scale day, which is, uh, on 10. 18 on dso. Really? Appreciate your insights. >>Likewise. Thank you so much. >>All right. Thank you for watching. Keep it right there. We'll be back with our next guest right here in the Cube. We're celebrating Exa scale day right back.

Published Date : Oct 16 2020

ENTITIES

Entity | Category | Confidence
Shaheen Khan | PERSON | 0.99+
China | LOCATION | 0.99+
Vienna | LOCATION | 0.99+
Austria | LOCATION | 0.99+
MIT Media Lab | ORGANIZATION | 0.99+
New York City | LOCATION | 0.99+
Orion X | ORGANIZATION | 0.99+
New Jersey | LOCATION | 0.99+
50 | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
9 | QUANTITY | 0.99+
Shane | PERSON | 0.99+
Long Island | LOCATION | 0.99+
AI Lab | ORGANIZATION | 0.99+
Cray Research | ORGANIZATION | 0.99+
Brookhaven National Labs | ORGANIZATION | 0.99+
Japan | LOCATION | 0.99+
Kendall Square Research | ORGANIZATION | 0.99+
5 jobs | QUANTITY | 0.99+
Cove | PERSON | 0.99+
2000 digits | QUANTITY | 0.99+
United States | LOCATION | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
Danny Hillis | PERSON | 0.99+
a year | QUANTITY | 0.99+
half a dozen | QUANTITY | 0.98+
third thing | QUANTITY | 0.98+
both | QUANTITY | 0.98+
three | QUANTITY | 0.98+
one | QUANTITY | 0.98+
64 | QUANTITY | 0.98+
Exa Scale Day | EVENT | 0.98+
32 | QUANTITY | 0.98+
six months later | DATE | 0.98+
64 bit | QUANTITY | 0.98+
third pillar | QUANTITY | 0.98+
16 | QUANTITY | 0.97+
first | QUANTITY | 0.97+
HBC | ORGANIZATION | 0.97+
one place | QUANTITY | 0.97+
87 miles | QUANTITY | 0.97+
tens | QUANTITY | 0.97+
Mark Fernandez | PERSON | 0.97+
zero | QUANTITY | 0.97+
Shaheen | PERSON | 0.97+
seven | QUANTITY | 0.96+
first job | QUANTITY | 0.96+
HPC Technologies | ORGANIZATION | 0.96+
two | QUANTITY | 0.94+
three different ecosystems | QUANTITY | 0.94+
every 10 seconds | QUANTITY | 0.94+
every five seconds | QUANTITY | 0.93+
Byzantine | PERSON | 0.93+
Exa scale day | EVENT | 0.93+
second thing | QUANTITY | 0.92+
Moore | PERSON | 0.9+
years ago | DATE | 0.89+
HPC | ORGANIZATION | 0.89+
three years | QUANTITY | 0.89+
three different developer | QUANTITY | 0.89+
Exascale Day | EVENT | 0.88+
Galileo | PERSON | 0.88+
three times | QUANTITY | 0.88+
a couple of weeks ago | DATE | 0.85+
exa scale day | EVENT | 0.84+
D. C | PERSON | 0.84+
many years ago | DATE | 0.81+
a decade ago | DATE | 0.81+
about | DATE | 0.81+
C two | TITLE | 0.81+
one thing | QUANTITY | 0.8+
10. 18 | DATE | 0.8+
Dr | PERSON | 0.79+
past 34 decades | DATE | 0.77+
two things | QUANTITY | 0.76+
Leighton | ORGANIZATION | 0.76+
11 simple way | QUANTITY | 0.75+
21 place | QUANTITY | 0.74+
three different segments | QUANTITY | 0.74+
more than 100 m | QUANTITY | 0.73+
FPG | ORGANIZATION | 0.73+
decades | QUANTITY | 0.71+
five | QUANTITY | 0.7+