Ajay Mungara, Intel | Red Hat Summit 2022


 

>> Welcome back to Boston. This is theCUBE's coverage of Red Hat Summit 2022, the first Red Hat Summit we've done face to face in at least two years; 2019 was our last one. We're kind of rounding the far turn, you know, coming up for the home stretch. My name is Dave Vellante, here with Paul Gillin. Ajay Mungara is here; he is a senior director in the IoT group for developer solutions and engineering at Intel. Ajay, thanks for coming on theCUBE.

>> Thank you so much.

>> We heard your colleague this morning in the keynote talking about the DevCloud. I feel like I need a DevCloud. What's it all about?

>> So we've been working with developers and the ecosystem for a long time, trying to build edge solutions. A lot of the time people think about IoT solutions as just compute at the edge, but really you've got to have some component of the cloud, there is a network, and there is the edge, and the edge is complicated because of the variety of devices that you need. When you're building a solution, you've got to figure out: where am I going to push the compute? How much of the compute am I going to run in the cloud? How much am I going to push to the network, and how much do I need to run at the edge? A lot of the time, developers don't have one environment where all three come together. Today, the way it works is you have all these edge devices that customers buy, install, and set up, and they try to do all of that; then they have a cloud environment where they do their development; and they only figure out how all of this comes together when they are integrating it at the customer, in the solution space. So what we did is we took all of these edge devices, put them in the cloud, and gave you one environment for cloud to edge, where you can build your complete solution.

>> Essentially it simulates...

>> No, it's not simulating.

>> So the cloud spans, the centralised cloud out to the edge?

>> You know, what we did is we took all of these edge devices that would typically get deployed at the edge, all these varieties of devices, and put them in a cloud environment. These are non-rack-mountable devices that you can buy in the market today. We have about 500 devices in the cloud, from Atom to Core to Xeon, to FPGAs, to accelerator cards, to graphics. All of these devices are available to you. So in one environment you can connect to any of the clouds, the hyperscalers; you can connect to any of these network devices; you can define your network topology; you can bring in any of your source that is sitting in a Git repository, or Docker containers that may be sitting somewhere in a cloud environment or on Docker Hub. You can pull all of these things together, and we give you one place where you can build it, where you can test it, where you can performance-benchmark it, so that when you actually go to the field to deploy it, you know what type of sizing you need.

>> So let me make sure I understand: if I want to test an actual edge device using 100-gig Ethernet versus MPLS versus 5G, you can do all that without virtualizing?

>> So all the edge devices are there today, and the network part of it we are building together with Red Hat, where we are putting everything in this environment.
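To make that workflow a little more concrete, here is a minimal, purely illustrative sketch of pulling a public container image onto a remote edge box and running it there over SSH. The hostname, image, and command are placeholders, and this is not the DevCloud interface itself, which fronts its devices through its own portal and scheduler; it simply assumes you have SSH and Docker access to some edge device.

```python
import subprocess

# Placeholder device and image; any box with SSH and Docker installed would do.
DEVICE_HOST = "edge-node-01.example.com"      # hypothetical edge device
IMAGE = "docker.io/library/python:3.11-slim"  # any public image

def run_on_device(host: str, image: str, command: str) -> str:
    """Pull the image on the remote device and run one containerized command there."""
    remote_cmd = f"docker pull {image} && docker run --rm {image} {command}"
    result = subprocess.run(
        ["ssh", host, remote_cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Smoke test: report the Python version inside the container on the edge box.
    print(run_on_device(DEVICE_HOST, IMAGE, "python --version"))
```

The value of a hosted environment is that this reserve-a-box, pull, run, measure loop is already wired up for several hundred device types instead of one box you own.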
So the network part of it is not quite solved yet, but that's what we want to solve. The goal here is: let's say you have five cameras, or 50 cameras with different resolutions, and you want to do some AI inference workloads at the edge. What type of compute do you need? What type of memory? How many devices, and where do you want to push the data? Because security is very important at the edge, you've really got to figure out: I've got to secure the data in flight, I want to secure the data at rest, and how do I do the governance of it? How do you do service governance, so that all the services, the different containers running on the edge device, are behaving well? You don't want one container hogging up all the memory or all the compute, or at certain points in the day you might have priority for certain containers. So all of these models, where do you run them? We have an environment where you can run all of that.

>> Okay, so take that example of AI inferencing at the edge. I've got an edge device and I've developed an application, and I'm going to say: okay, I want you to do the AI inferencing in real time on some kind of streaming data coming in, and I want you to persist a time stamp every hour on the hour. Or if some event happens, if a deer runs across the headlights, I want you to persist that data and send it back to the cloud. And you can develop that, test it, benchmark it?

>> Right, and then you can say: okay, in this environment I have five cameras at different angles, and you want to try it out. And what we have is a product, OpenVINO, an open source toolkit that does the optimizations you need for edge inference. So, to recognise the deer in your example, I develop the training model somewhere in the cloud: I've annotated the different video streams, and I know that I'm recognising a deer. Now you need to figure out: when the deer is coming, you want to immediately take an action. You don't want to send all of your video streams to the cloud; it's too expensive, bandwidth costs a lot. So you want to compute that inference at the edge. In order to do that inference at the edge, you need an environment where you can do it, and to build that solution you need to know: what type of edge device do you really need, what type of compute, how many cameras are you computing on? You're not only recognising a deer, you're probably recognising some other objects; you can do all of that. In fact, one of the things that happened was, I took my nephew to the San Diego Zoo, and he was very disappointed that he couldn't see the chimpanzees that were there, the gorillas and other things. He was very sad, so I said, all right, there should be a better way. I saw there was a stream of the camera feed. So we did an edge inference, and we did some logic to say: at this time of the day the gorillas get fed, so the likelihood of you actually seeing the gorilla is very high. So you just go at that point, so that you see it, you capture it. That's what you do, and you want to develop that entire solution.
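As a rough sketch of that "infer at the edge, send only events to the cloud" pattern: the detector and uploader below are stubs standing in for an OpenVINO model and a real upload path (MQTT, HTTPS, etc.), and the labels and intervals are made up for illustration.

```python
import json
import time
from datetime import datetime, timezone

def detect_objects(frame) -> list[str]:
    """Stub for an edge inference call (an OpenVINO model would go here)."""
    return []

def send_to_cloud(event: dict) -> None:
    """Stub for the cloud upload; printing stands in for the network call."""
    print("uploading:", json.dumps(event))

def run_edge_loop(frames, snapshot_interval_s: float = 3600.0) -> None:
    last_snapshot = 0.0
    for frame in frames:
        labels = detect_objects(frame)
        now = time.time()
        # Only interesting events leave the edge; the raw video stream never does.
        if "deer" in labels:
            send_to_cloud({"type": "detection", "label": "deer",
                           "ts": datetime.now(timezone.utc).isoformat()})
        # Periodic snapshot (e.g. hourly) so the cloud side has a heartbeat.
        if now - last_snapshot >= snapshot_interval_s:
            send_to_cloud({"type": "snapshot",
                           "ts": datetime.now(timezone.utc).isoformat()})
            last_snapshot = now

if __name__ == "__main__":
    run_edge_loop(frames=[object()] * 5, snapshot_interval_s=1.0)
```

The sizing questions in the interview (how many cameras per device, what memory, what compute) amount to asking how many instances of a loop like this a given box can sustain.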
It's based on weather, based on other factors; you need to bring all of these services together and build a solution, and we offer an environment that allows you to do it.

>> Will you customise the edge configuration for the developer? If they want 50 cameras, you don't have 50 cameras available, right?

>> For all the cameras, what we do is we have a streaming capability that we support, so you can upload all your videos and say: I want to simulate 50 streams, or I want to simulate 30 streams, or just the two or three videos you want to pull in, and you want to be able to do the inference simultaneously, running different algorithms at the edge. All of that is supported. And the bigger challenge at the edge is this: developing the solution is fine, but when you go to actual deployment and post-deployment monitoring, maintenance, making sure you're managing it, it's very complicated. What we have seen is that over 50% of developers, 51% to be precise, have developed some kind of cloud-native application recently. So we believe that if you bring that type of cloud-native development model to the edge, then your scaling problem, your maintenance problem, how you actually deploy it, all of these challenges can be better managed. And if you run all of that with an orchestration layer on Kubernetes, and we run everything on top of OpenShift, you have a deployment-ready solution already there: everything is containerised, you have it as Helm charts and Docker Compose, you have tested it all in this environment, and now you take that to deployment. If it is any standard Kubernetes environment or OpenShift, you can straight away deploy your application.

>> What does that edge architecture look like? What's Intel's and Red Hat's philosophy around it, what's programmable, and how is it different? I know you can run SAP in a data centre; you guys have got that covered. What does the edge look like? What's that architecture of silicon and middleware? Describe that for us.

>> So at the edge, think about it: it can be traditional. In an industrial PC you have a lot of Windows environments, you have a lot of Linux, and they're now in an edge environment. Quite a few of these devices, and I'm not talking about the far edge where there are tiny microcontrollers; I'm talking about the devices that connect to those far-edge devices, collect the data, do some analytics, do some compute. The far-edge device could be a camera, a temperature sensor, a weighing scale, anything. And then, instead of pushing all the data to the cloud for you to do the analysis, you're going to have some type of edge devices collecting all this data, making decisions close to the data, doing some analysis there. So you need analysis tools and certain other things. And let's say you want to run RHEL or any of these operating systems at the edge; then you have the ability to manage all of that using a control node. The control node can also sit at the edge. In some cases, like a smart factory, you have a little data centre in the smart factory, or even in a retail store, behind a closet.
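As a minimal sketch of what "deployment-ready and containerised" can mean in practice: creating one Deployment on a Kubernetes or OpenShift edge cluster with the official Kubernetes Python client, with CPU and memory limits so a single container cannot hog the node. The image name, labels, namespace, and node-selector label are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the edge cluster is present
apps = client.AppsV1Api()

container = client.V1Container(
    name="people-counter",
    image="quay.io/example/people-counter:1.0",   # hypothetical image
    resources=client.V1ResourceRequirements(      # governance: no resource hogging
        requests={"cpu": "500m", "memory": "256Mi"},
        limits={"cpu": "1", "memory": "512Mi"},
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="people-counter"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "people-counter"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "people-counter"}),
            spec=client.V1PodSpec(
                containers=[container],
                node_selector={"node-role.example.com/edge": "true"},  # hypothetical label
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="edge-demo", body=deployment)
print("deployment created")
```

The same manifest, validated once in the hosted environment, is what gets handed to the control node sitting in the factory or behind the retail-store closet.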
You have a bunch of devices sitting there, correct, and those devices can all be managed and clustered in an environment. So now the question is: how do you deploy applications to that edge? How do you collect all the data coming through the camera and other sensors, process it close to where the data is being generated, and make immediate decisions? The architecture would look like this: you have some cloud, which does some management of these edge devices and of the applications, some type of control. You have some network, because you need to connect to that. Then you have the whole plethora of edge, starting from a hybrid environment where you have an entire mini data centre sitting at the edge, or it could be one or two devices that are just collecting data from these sensors and processing it. That is the heart of the other challenge: the architecture varies across verticals, from smart cities to retail to healthcare to industrial. They have all these different variations; they need to worry about the different environments they are going to operate under, different regulations they have to look into, different security protocols they need to follow. So maybe your solution is just recognising people and identifying whether they are wearing a helmet in a coal mine, whether they are wearing their safety gear or not; versus someone riding a bike in traffic where, for safety reasons, we want to identify whether the person is wearing a helmet. Very different use cases, very different environments, different ways in which you are operating. Similar algorithms are used, by the way, but how you deploy them varies quite a bit.

>> But the DevCloud, make sure I understand it: you talked about a retail store, a great example, but that's general-purpose infrastructure that's now customised through software for that retail environment. Same thing with telco, same thing with the smart factory, you said. Not the far edge, right? But that's coming in the future, or is it?

>> It extends to the far edge. We put everything in one cloud environment. In fact, I put some cameras on iPads and laptops, and we could stream different videos; we did all of that. But a data centre is a boring environment, right? What are you going to see? A bunch of racks and servers. So putting far-edge devices there didn't make sense. What we did instead is give you an easy ability to stream, connect, or upload the data that gets generated at the far edge. Say, time-series data: you can take some of the time-series data, some of the sensor data, but mostly it is camera data, videos. So you upload those videos, and that is as good as streaming them, right? That means you are generating that data, and you're developing your solution with the assumption that the camera is observing whatever is going on. Then you do your edge inference and you optimise it, you make sure that you size it, and then you have a complete solution.

>> Are you supporting all manner of microprocessors at the edge, including non-Intel?

>> Today it is all Intel, but because we are really promoting the whole open ecosystem, yes, we want to be able to do that in the future.
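The point that the same algorithm gets wrapped in very different deployment policies per vertical can be sketched as a small configuration object; every field name below is made up for illustration rather than part of any real product.

```python
from dataclasses import dataclass

@dataclass
class EdgeSolutionConfig:
    """Illustrative per-vertical knobs around one shared detection algorithm."""
    vertical: str
    model: str                    # which trained model to load
    send_raw_video: bool = False  # regulations may forbid shipping video off-site
    alert_channel: str = "cloud"  # where alerts go: cloud, on-prem system, SMS, ...
    retention_days: int = 7

COAL_MINE = EdgeSolutionConfig(
    vertical="mining", model="helmet-detect", alert_channel="on-prem", retention_days=30)

ROAD_SAFETY = EdgeSolutionConfig(
    vertical="traffic", model="helmet-detect", alert_channel="cloud", retention_days=1)

def describe_pipeline(cfg: EdgeSolutionConfig) -> None:
    # Same detection model, different policy wrapped around it.
    print(f"[{cfg.vertical}] model={cfg.model} alerts->{cfg.alert_channel} "
          f"retain={cfg.retention_days}d raw_video_offsite={cfg.send_raw_video}")

if __name__ == "__main__":
    for cfg in (COAL_MINE, ROAD_SAFETY):
        describe_pipeline(cfg)
```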
But today we were trying to address the customers we are serving today, and we needed an environment where they could do all of this. For example, under what circumstances would you use an i5 versus an i9, versus putting an algorithm on integrated graphics, versus running it on a CPU, or running it on a Neural Compute Stick? It's hard, right? You would need to buy all those devices and experiment with your solutions on all of them. So with everything available in one environment, you can compare and contrast to see which hardware makes the best sense.

>> But it's not just x86? x86 is your portfolio?

>> A portfolio of FPGAs, of graphics, of all that Intel supports today, and in future we would want to open it up.

>> So how do developers get access to this cloud?

>> It is all free. You just have to go sign up and register, and you get access to it. It is devcloud.intel.com. You go there, and the container playground is available for free for developers, and you can bring in container workloads there, or even bare-metal workloads. Yes, all of it is available to you.

>> Do you need to reserve the endpoint devices?

>> Correct. That is where there is an interesting technology.

>> To govern this?

>> Correct. What we did was we built a kind of queuing system, a scheduler. You develop your application on a control node, and you only need the edge device when you're scheduling that workload. So we have this scheduling system; we use Kafka and other technologies to do the scheduling in the container workload environment, with the optimised operators that are available in an OpenShift environment. We leverage those operators; we installed them. What happens is you take your workload and you run it, let's say, on an i7 device. While you're running that workload on the i7 device, that device is dedicated to you. And we've instrumented each of these devices with telemetry, so while your workload is running on that particular device we can see what the memory looks like, what the power looks like, how hard the device is running, what the compute looks like. We capture all those metrics. Then you take the workload and run it on an i9, or on a graphics card, or on an FPGA, and you compare and contrast and say: okay, for this particular workload, this device makes the best sense. In some cases, I'll tell you, developers have come back and told me: I don't need a bigger processor, I need bigger memory.

>> Yeah, sure.

>> Right. And in some cases they've said: look, I want to prioritise accuracy over performance, because in a healthcare setting accuracy is more important. In some cases they have optimised for the size of the device, because it needs to fit in the right environment, in the right place. So what you optimise for in each use case is up to the solution, up to the developer, and we give you the ability to do that.

>> What kind of folks are you seeing? You've got hardware developers, you've got software developers, right? People coming in?

>> We have a lot of system integrators, we have enterprises coming in, we are seeing a lot of software solution developers, independent software developers. We also have a lot of students coming in; it's a free environment for them to play with, instead of having to buy all of these devices. We're seeing those people.
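The reserve-run-measure loop described above might look roughly like the sketch below. The telemetry numbers are random placeholders rather than anything read from real hardware, and the device names are just labels; it only illustrates the compare-and-contrast step, not the real scheduler.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class RunResult:
    device: str
    latency_ms: float
    peak_mem_mb: float
    avg_power_w: float

def run_workload_on(device: str) -> RunResult:
    """Stand-in for reserving a device, running the container, and reading telemetry."""
    time.sleep(0.01)  # the device is exclusively ours while the job runs
    return RunResult(device,
                     latency_ms=random.uniform(5, 50),
                     peak_mem_mb=random.uniform(200, 2000),
                     avg_power_w=random.uniform(6, 45))

if __name__ == "__main__":
    devices = ["core-i5", "core-i7", "core-i9", "integrated-gpu", "fpga-card"]
    results = [run_workload_on(d) for d in devices]
    for r in results:
        print(f"{r.device:14s} {r.latency_ms:6.1f} ms  "
              f"{r.peak_mem_mb:7.0f} MB  {r.avg_power_w:5.1f} W")
    # Pick by whatever matters for the solution: latency, memory, power, accuracy...
    print("lowest latency:", min(results, key=lambda r: r.latency_ms).device)
    print("lowest power:  ", min(results, key=lambda r: r.avg_power_w).device)
```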
I mean, we are pulling a lot of developers through this environment currently, and of course we're getting feedback from them. We are just getting started here. We are continuing to improve our capabilities; we are adding virtualisation capabilities. We are working very closely with Red Hat to showcase all the goodness coming out of Red Hat, OpenShift, and other innovations. We heard in one of the OpenShift sessions they're talking about MicroShift, they're talking about HyperShift, about a lot of these innovations, operators, everything coming together. But where do developers play with all of this? If you spend half your time trying to configure it, install it, and buy the hardware, trying to figure it out, you lose patience and you lose time. And it's complicated, right? How do you set it up, especially when it involves the cloud, the network, and the edge? You need all of that set up. So what we have done is set up everything for you; you just come in. And by the way, not only that: what we realised when we go talk to customers is that they don't want to listen to all our optimizations, processors, and all that. They want to say: I am here to solve my retail problem. I want to count the people coming into my store. I want to see if there are any spills, recognise them, and go clean them up before a customer complains. Or I have a brain tumour segmentation problem where I want to identify if the tumour is malignant or not. Or I want telehealth solutions. They're really talking about these use cases. So what we did is we built many of these use cases by talking to customers, open sourced them, and made them available on DevCloud for developers to use as a starting point, so that they have this retail starting point, or this healthcare starting point, all these use cases, so that they have all the code, and we have shown them how to containerise it. The biggest problem is that developers at the edge still don't know how to take a legacy application and make it cloud native. They just wrap it all into one Docker container and say, okay, now I'm containerised; there's a lot more to do. So we tell them how to do it, we train these developers, and we give them an opportunity to experiment with all these use cases, so that they get closer and closer to what the customer solutions need to be.

>> Yeah, we saw that a lot with the early cloud, where they wrapped their legacy apps in a container and shoved it into the cloud. It was really just hosting legacy apps; it didn't take advantage of the cloud. Now people have come around. It sounds like a great free developer resource. Take advantage of it. Where do they go?

>> So it's devcloud.intel.com.

>> devcloud.intel.com. Check it out. It's a great freebie. Ajay, thanks very much.

>> Thank you very much. I really appreciate your time.

>> All right, keep it right there. This is Dave Vellante for Paul Gillin. We're right back with more of theCUBE's coverage of Red Hat Summit 2022.

Published Date: May 11, 2022

