MANUFACTURING V1b | CLOUDERA
>>Welcome to our industry drill-downs for manufacturing. I'm here with Michael Gerber, who is the managing director for automotive and manufacturing solutions at Cloudera. In this first session, we're going to discuss how to drive transportation efficiencies and improve sustainability with data. Connected trucks are fundamental to optimizing fleet performance and costs and to delivering new services to fleet operators. What's going to happen here is Michael's going to present some data and information, and then we're going to come back and have a little conversation about what we just heard. Michael, great to see you. Over to you.

>>Thank you, Dave, and I appreciate having this conversation today. Connected trucks are an area where we have seen a lot of action here at Cloudera, and I think the reason is important. First of all, you can see that this change is happening very, very quickly: 150% growth is forecast by 2022. And the reason we're seeing a lot of action and a lot of growth is that there are a lot of benefits. We're talking about a B2B situation here, so this is truck makers providing benefits to fleet operators. And if you look at the top benefits that fleet operators expect, you see them in the graph here.

Almost 80% of them expect improved productivity, things like improved route efficiencies and improved customer service, and decreased fuel consumption, all through better technology. This isn't technology for technology's sake; these connected trucks are coming onto the marketplace because they can provide tremendous value to the business, and in this case we're talking about fleet operators and fleet efficiencies. So here's one of the things that's really important to enabling this: trucks are becoming connected because, at the end of the day, we want to be able to deliver fleet efficiencies through connected truck analytics and machine learning. Let me explain a little bit about what we mean by that, because the way this happens is by creating a connected vehicle analytics and machine learning life cycle, and to do that you need to do a few different things.

You start off, of course, with connected trucks in the field, and you can have many of these trucks, because typically you're dealing at a truck level and at a fleet level. You want to be able to do analytics and machine learning to improve performance. So you start off with these trucks, and the first thing you need to be able to do is connect to those trucks. You have to have an intelligent edge where you can collect that information from the trucks, and once you've collected this information, you want to be able to analyze that data in real time and take real-time actions. Now, as I'm going to show you, the ability to take that real-time action is actually the result of your machine learning life cycle. Let me explain what I mean by that.

So we have these trucks, and we start to collect data from them. At the end of the day, what we'd like to be able to do is pull that data into either your data center or into the cloud, where we can start to do more advanced analytics. We start with being able to ingest that data into the cloud, into that enterprise data lake. We store that data.
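As a rough illustration of what that intelligent-edge step can look like, here is a minimal Python sketch of an edge agent that applies a simple real-time rule to each incoming sensor reading and forwards batches upstream for ingestion. The field names, the coolant-temperature threshold, and the way readings arrive are illustrative assumptions for the example, not Cloudera's actual implementation (which in practice uses edge tooling such as Apache MiNiFi for this step).

```python
import json
import time
from collections import deque

# Illustrative rule: flag readings whose coolant temperature is too high.
COOLANT_TEMP_LIMIT_C = 110.0
BATCH_SIZE = 100


class EdgeAgent:
    """Toy edge agent: react to readings in real time, then forward batches upstream."""

    def __init__(self):
        self.buffer = deque()

    def on_reading(self, reading: dict) -> None:
        # Real-time rule evaluated at the edge, before any data leaves the truck.
        if reading.get("coolant_temp_c", 0.0) > COOLANT_TEMP_LIMIT_C:
            self.raise_alert(reading)
        self.buffer.append(reading)
        if len(self.buffer) >= BATCH_SIZE:
            self.forward_batch()

    def raise_alert(self, reading: dict) -> None:
        # In a real deployment this might notify the driver or the fleet operator.
        print(f"ALERT truck={reading['truck_id']} coolant_temp={reading['coolant_temp_c']:.1f}C")

    def forward_batch(self) -> None:
        # Stand-in for streaming the batch to the cloud / enterprise data lake.
        batch = [self.buffer.popleft() for _ in range(len(self.buffer))]
        payload = json.dumps(batch)
        print(f"forwarding {len(batch)} readings ({len(payload)} bytes) to the data lake")


if __name__ == "__main__":
    agent = EdgeAgent()
    for i in range(250):  # simulated sensor feed
        agent.on_reading({
            "truck_id": "T-042",
            "ts": time.time(),
            "coolant_temp_c": 90.0 + (i % 30),
            "oil_pressure_kpa": 300.0,
        })
```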
We want to enrich it with other data sources. So, for example, if you're doing truck predictive maintenance, you want to take that sensor data you've collected from those trucks and augment it with, say, your dealership service information. Now you have sensor data and the associated repair orders, and you're equipped to do things like predict when maintenance will be needed; you have all the data sets you need to do that.

So what do you do here? Like I said, you've ingested the data, you're storing it, you're enriching it with other data, and you're processing it. You're aligning, say, the sensor data to the transactional data from your repair and maintenance systems. You're bringing it together so that you can do two things. First, you can do self-service BI on that data, things like fleet analytics. But more importantly, as I was saying before, you now have the data sets to create machine learning models. If you have the sensor values and, for example, the need for a dealership repair order, you can start to correlate which sensor values predicted the need for maintenance, and you can build out those machine learning models. And then, as I mentioned, you can push those models back out to the edge, which is how you take the real-time actions I mentioned earlier: as data comes through in real time, you're running it against that model, and you can act on it immediately.

This analytics and machine learning life cycle is exactly what Cloudera enables: the end-to-end ability to ingest the data, store it, put a query layer over it, build machine learning models, and then run those models in real time. That's what we do as a business. Now, one customer we have worked with to deliver these types of results is Navistar, and Navistar was an early adopter of connected truck analytics. They provided these capabilities to their fleet operators, and they started off by connecting 475,000 trucks, up to well over a million now.

The point here is that they were centralizing data from their telematics service providers and from their trucks, and bringing in things like weather data and all those types of things. What they started to do was build out machine learning models aimed at predictive maintenance, and what's really interesting is that Navistar made tremendous strides in reducing the expense associated with maintenance. Rather than waiting for a truck to break and then fixing it, they would predict when that truck needed service through condition-based monitoring and service it before it broke down, in a much more cost-effective manner. And you can see the benefits: they reduced maintenance costs by 3 cents a mile, down from the industry average of 15 cents a mile to 12 cents a mile.

So this was a tremendous success for Navistar, and we're seeing this across many truck manufacturers.
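To make that model-building step concrete, here is a small, hypothetical sketch of the workflow Michael describes: label truck sensor readings by whether a repair order followed within a short window, train a classifier, and look at which sensors mattered most. The column names, the 14-day labeling window, the synthetic data, and the choice of a random forest are all assumptions for the example, not Navistar's or Cloudera's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical daily sensor aggregates per truck (in practice, ingested from the fleet).
sensors = pd.DataFrame({
    "truck_id": rng.integers(0, 50, 5000),
    "day": pd.to_datetime("2021-01-01") + pd.to_timedelta(rng.integers(0, 180, 5000), "D"),
    "coolant_temp_c": rng.normal(95, 8, 5000),
    "oil_pressure_kpa": rng.normal(300, 25, 5000),
    "vibration_g": rng.normal(0.4, 0.15, 5000),
})

# Hypothetical dealership repair orders (the enrichment data set).
repairs = pd.DataFrame({
    "truck_id": rng.integers(0, 50, 300),
    "repair_day": pd.to_datetime("2021-01-01") + pd.to_timedelta(rng.integers(0, 180, 300), "D"),
})

# Label a reading 1 if a repair order for the same truck follows within 14 days.
sensors = sensors.reset_index().rename(columns={"index": "row_id"})
merged = sensors.merge(repairs, on="truck_id", how="left")
lead_days = (merged["repair_day"] - merged["day"]).dt.days
merged["needs_maintenance"] = ((lead_days >= 0) & (lead_days <= 14)).astype(int)
labels = merged.groupby("row_id")["needs_maintenance"].max()

features = ["coolant_temp_c", "oil_pressure_kpa", "vibration_g"]
X = sensors.set_index("row_id")[features]
y = labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("holdout accuracy:", round(model.score(X_test, y_test), 3))
# "Which sensor values predicted the need for maintenance?"
print(pd.Series(model.feature_importances_, index=features).sort_values(ascending=False))
```

With synthetic data the importances are meaningless; the point is the shape of the workflow: join sensor history to repair orders, derive a label, fit a model, inspect which signals drive it.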
We're working with many of the truck OEMs, and they are all working to achieve very similar types of benefits for their customers. So that's a little bit about Navistar. Now we're going to turn to Q&A; Dave's got some questions for me in a second. But before we do that, if you want to learn more about how we work with connected vehicles and autonomous vehicles, please go to our website at the URL you see up on the screen, cloudera.com/solutions/manufacturing. You'll see a whole slew of collateral and information, in much more detail, on how we connect trucks for fleet operators and provide analytics use cases that drive dramatically improved performance. So with that being said, I'm going to turn it over to Dave for questions.

>>Thank you. Michael, that's a great example. I love the life cycle; you can visualize it very well. You've got an edge use case, you're doing real-time inference at the edge, and then you're blending that sensor data with other data sources to enrich your models, and you can push that back to the edge. That's the life cycle. So I really appreciate that info. Let me ask you: when you think about analytics and machine learning, what are the most common connected vehicle use cases you see customers really leaning into?

>>Yeah, that's a great question, because everybody always thinks machine learning is the first thing you do. Actually, it's not. Many of our customers start slow: let's simply connect our trucks, or our vehicles, or whatever our IoT asset is. Then you can do very simple things, like performance monitoring of the piece of equipment. In the truck industry there's a lot of performance monitoring of the truck, but also performance monitoring of the driver. How is the driver performing? Is there a lot of idle time? What do route efficiencies look like? By connecting the vehicles, you get insights, as I said, into the truck and into the driver, and that's not machine learning.

But that monitoring piece is really, really important. The first thing we see is monitoring types of use cases. Then you start to see companies move toward the machine learning and AI models, where you're using inference on the edge, and you start to see things like predictive maintenance and real-time route optimization. You see that evolution toward smarter, more intelligent, dynamic types of decision-making. But let's not minimize the value of good old-fashioned monitoring that gives you that visibility first, then moving to smarter use cases as you go forward.

>>You know, it's interesting. When you talked about the monitoring, I'm envisioning the bumper sticker, "How am I driving?", and somebody probably calls that number when they get cut off. Many people might think, oh, it's about big brother, but it's not. Okay, fine, maybe a little. But it's really about improvement, training, and continuous improvement.
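Staying with that monitoring theme for a moment: before any machine learning, much of the value comes from simple aggregations over the connected-truck feed. The sketch below computes idle-time share and average moving speed per driver from a synthetic trip log and flags drivers outside an illustrative bound. The columns and the 30% idle threshold are assumptions for the example, not an industry standard or any customer's actual metric.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic per-minute telemetry: driver and speed.
log = pd.DataFrame({
    "driver_id": rng.integers(0, 8, 10_000),
    "speed_kph": np.clip(rng.normal(55, 30, 10_000), 0, None),
})
log["is_idle"] = log["speed_kph"] < 2  # engine on, vehicle effectively not moving

report = log.groupby("driver_id").agg(
    minutes=("speed_kph", "size"),
    idle_share=("is_idle", "mean"),
    avg_moving_speed_kph=("speed_kph", lambda s: s[s >= 2].mean()),
)

IDLE_LIMIT = 0.30  # illustrative control bound
report["flag_excess_idle"] = report["idle_share"] > IDLE_LIMIT
print(report.round(2))
```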
And then of course the route optimization; that's bottom-line business value. So I love those examples. I wonder about the big hurdles people should think about when they want to jump into the use cases you just talked about. What are they going to run into, the blind spots they're going to get hit with?

>>There are a few different things. First of all, a lot of times your IT folks aren't familiar with the more operational IoT types of data, so just connecting to that data can be a new skill set; there's very specialized hardware in the car, and protocols, and things like that. That's number one, the classic IT/OT conundrum that many of our customers struggle with. But then, more fundamentally, if you look at the way these connected truck or IoT solutions started, oftentimes the first generation were very custom built, so they were brittle; they were kind of hardwired. And as you moved toward more commercial solutions, you had what I call the silos: fragmentation, with this capability from one vendor and that capability from another vendor, you get the idea. One of the things we really think needs to be brought to the table is, first of all, an end-to-end data management platform that's integrated and tested together, where you have data lineage across the entire stack. But then, also importantly, to be realistic, we have to be able to integrate with industry best practices as well, in terms of solution components in the car, the hardware, and all those types of things. So, stepping back for a second, there has been fragmentation and complexity in the past. We're moving toward more standards and more standardized offerings, and our job as a software maker is to make that easier and connect those dots, so customers don't have to do it all on their own.

>>And you mentioned specialized hardware. One of the things we heard earlier on the main stage was your partnership with Nvidia. We're talking about new types of hardware coming in, and you're optimizing for that. We see the IT and OT worlds blending together, no question. And then that end-to-end management piece: this is different from IT, where normally everything's controlled in the data center; this is rethinking how you manage metadata. So, in the spirit of what we talked about earlier today, are you working with other technology partners to accelerate these solutions and move them forward faster?

>>Yeah, I'm really glad you're asking that, because we actually embarked on a project called Project Fusion, which was about integrating with the core vendors out there that provide some very important capabilities across that connected vehicle life cycle. We joined forces with them to build an end-to-end demonstration and reference architecture to enable the complete data management life cycle. Cloudera's piece of this was ingesting the data and all the things I talked about: storing it and the machine learning.
So we provide that end to end, but we wanted to partner with some key partners. The partners we integrated with: NXP provides the service-oriented gateways in the car, so that's the hardware in the car. Wind River provides an in-car operating system that's Linux, hardened and tested. We then ran our Apache MiNiFi agent, which is part of Cloudera DataFlow, in the vehicle, right on that operating system, on that hardware. We pump the data over into the cloud, where we do all the data analytics and machine learning and build out these very specialized models. And then we used a company called Airbiquity; once we'd built those models, they specialize in automotive over-the-air updates, so they can push updated models back to the vehicle very rapidly. So what we said is, look, there's an established ecosystem, if you will, of leaders in this space, and what we wanted to do is make sure our platform was part and parcel of that ecosystem. And by the way, you mentioned Nvidia as well; we're working closely with Nvidia now, so when we're doing the machine learning, we can leverage some of their hardware to get further acceleration on the machine learning side of things. One of the things I always say about these types of use cases is that it takes a village, and what we've really tried to do is build out an ecosystem that provides that village, so we can make that analytics and machine learning life cycle as fast as it can be.

>>This is again another great example of data-intensive workloads. It's not your grandfather's ERP running on traditional systems; these are really purpose-built, maybe customizable for certain edge use cases. They're low cost, low power; they can't be bloated. And you're right, it does take an ecosystem. You've got to have APIs that connect, and that takes a lot of work and a lot of thought. So that leads me to the technologies underpinning this. We've talked a lot in theCUBE about semiconductor technology, and now that's changing, and about the advancements we're seeing there. What do you see as some of the key technology areas advancing this connected vehicle machine learning?

>>You know, it's interesting; I'm seeing it in a few notable places. First of all, the vehicle itself is getting smarter. When you look at that NXP type of gateway we talked about, it used to be kind of a dumb gateway: all it was really doing was pushing data up and down and providing isolation down to the lower-level subsystems, so it was really security and just basic communication. That gateway is now becoming what they call a service-oriented gateway. It has real compute and memory now, so you can run serious compute in the car, things like machine learning inference models; you have a lot more power in the car. At the same time, 5G is making it so you can push data fast enough to make low-latency computing available, even on the cloud.
So now you've got incredible compute both at the edge in the vehicle and on the cloud, and on the cloud you've got partners like Nvidia accelerating it still further through better GPU-based compute. So across the whole stack, that machine learning life cycle we talked about, Dave, it seems like there are improvements at every step along the way. We're starting to see technology optimization pervasive throughout the cycle.

>>And then real quick, it's not a quick topic, but you mentioned security. We're seeing a whole new security model emerge; there is no perimeter anymore in a use case like this, is there?

>>No, there isn't. And remember, we're the data management platform, and one thing we have to provide is end-to-end lineage of where that data came from, who can see it, and how it changed. That's something we have integrated from the beginning: from when the data is ingested, through when it's stored, through when it's processed and people are doing machine learning, we provide that lineage so that security and governance are assured throughout the data life cycle.

>>And federated, in this example, across the fleet. All right, Michael, that's all the time we have right now. Thank you so much for that great information. Really appreciate it.

>>Dave, thank you. And thanks to the audience for listening in today.

>>Yes, thank you for watching.

>>Okay, we're here in the second manufacturing drill-down session with Michael Gerber. He's the managing director for automotive and manufacturing solutions at Cloudera, and we're going to continue the discussion with a look at how to lower costs and drive quality in IoT analytics with better uptime. And look, when you do the math, it's really quite obvious: when a system is down, productivity is lost and it hits revenue and the bottom line, while improved quality drives better service levels and reduces lost opportunities. Michael, great to see you. Take it away.

>>All right, thank you so much, Dave. So we're going to talk a little bit about connected manufacturing and how IoT around connected manufacturing can, as Dave said, improve quality outcomes for manufacturers and improve your plant uptime. First, indulge me in a quick history lesson; I promise to be quick. We've all heard about industry 4.0, the fourth industrial revolution, and that's really what we're here to talk about today. The first industrial revolution was real simple: steam power, which reduced backbreaking work. The second industrial revolution was the mass assembly line, so think Henry Ford and motorized conveyor belts, mass automation. In the third industrial revolution, things got interesting: you started to see automation, but that automation essentially meant programming a robot to do something, and it did the same thing over and over and over, irrespective of how your outside conditions changed. The fourth industrial revolution is a very different beast. Now we're connecting equipment and processes and getting feedback from them.
And through machine learning, we can make those processes adaptive. That's really what we're talking about in the fourth industrial revolution, and it is intrinsically connected to data and a data life cycle. By the way, this matters not for technology's sake; it matters because it actually drives very important business outcomes. First of all, quality: if you look at the cost of quality, even despite decades of manufacturers working to improve it, quality issues still account for about 20% of sales. So a fifth of what you manufacture, from a revenue perspective, is being eaten up by quality problems. Plant downtime costs companies $50 billion a year. So when we're talking about using data in these industry 4.0, connected-data types of use cases, we're not doing it merely to implement technology. We're doing it to move those business drivers: improving quality and reducing downtime.

So let's talk about what a connected manufacturing data life cycle looks like, because this is actually the business Cloudera is in. We call this manufacturing edge to AI, this analytics life cycle, and it starts with your plants. Those plants are increasingly connected; as I said, sensor prices have come down two thirds over the last decade, and those sensors are connected over the internet. So suddenly we can collect all this data from your manufacturing plants. What do we want to be able to do?

We want to collect it, and we want to analyze that data as it's coming across, in-stream, and take intelligent real-time actions. We might do some simple processing and filtering at the edge, but we really want to take real-time actions on that data. And this is the inference part of things: the ability to take these real-time actions is actually the result of a machine learning life cycle. I want to walk you through this. It starts with ingesting this data for the first time and putting it into our enterprise data lake, and that data lake can be either within your data center or in the cloud. You're going to ingest that data, you're going to store it, and you're going to enrich it with enterprise data sources. So now you'll have, say, sensor data, and you'll have maintenance repair orders from your maintenance management systems. Now you're getting really nice data sets. You can start to ask, hey, which sensor values correlate to the need for machine maintenance? The data sets are becoming very compatible with machine learning. So you bring these data sets together, you process them, you align your time-series data from your sensors to your timestamped data from your enterprise systems, your maintenance management system as I mentioned. Once you've done that, we can put a query layer on top, so now we can start to do advanced analytics queries across all these different types of data sets.
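As a concrete, if simplified, picture of that "align and query" step, the sketch below uses pandas to align each maintenance work order with the most recent sensor reading for the same machine (an as-of join), and then runs a query-style aggregation over the combined set. The table and column names are illustrative assumptions; in the platform Michael describes, this would typically be SQL over the data lake rather than pandas on a laptop.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Synthetic plant-floor sensor readings.
sensor = pd.DataFrame({
    "machine_id": rng.integers(0, 20, 8000),
    "ts": pd.to_datetime("2021-06-01") + pd.to_timedelta(rng.integers(0, 30 * 24 * 60, 8000), "min"),
    "vibration_g": rng.normal(0.5, 0.2, 8000),
    "temp_c": rng.normal(70, 6, 8000),
}).sort_values("ts")

# Synthetic work orders from a maintenance management system.
work_orders = pd.DataFrame({
    "machine_id": rng.integers(0, 20, 120),
    "ts": pd.to_datetime("2021-06-01") + pd.to_timedelta(rng.integers(0, 30 * 24 * 60, 120), "min"),
    "order_type": rng.choice(["unplanned_repair", "planned_pm"], 120),
}).sort_values("ts")

# Align each work order with the latest prior sensor reading for that machine.
aligned = pd.merge_asof(work_orders, sensor, on="ts", by="machine_id", direction="backward")

# Query-layer style question: what did conditions look like just before unplanned repairs?
summary = aligned.groupby("order_type")[["vibration_g", "temp_c"]].mean().round(3)
print(summary)
```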
But as I mentioned, what's really important here is that once you've stored long histories of that data, you can build out the machine learning models I talked about earlier. So, like I said, you can start to determine which sensor values correlated to the need for equipment maintenance from your maintenance management systems, and you can build out those models and say, hey, here are the sensor values, the conditions, that predict the need for maintenance. Once you understand that, you can build out those models and deploy them out to the edge, where they then work in the inference mode we talked about: continuously sniffing the data as it comes in and asking, are we experiencing the conditions that predicted the need for maintenance? If so, let's take real-time action: let's schedule an equipment maintenance work order, and let's order the parts ahead of time, before that piece of equipment fails. That allows us to be very, very proactive.

This is one of the most popular use cases we're seeing in connected manufacturing, and we're working with many different manufacturers around the world. I want to highlight one of them that I thought was really interesting. This company is Faurecia, the supplier associated with Peugeot Citroën out of France. They are huge, a multinational automotive parts and systems supplier, and as you can see, they operate 300 sites in 35 countries. So very global. They connected 2,000 machines, and once they were able to take data from them, they started off learning how to ingest the data. They started off very well with manufacturing control towers, to be able to simply monitor the data coming in and monitor the process. That was the first step: 2,000 machines, 300 different variables, things like vibration, pressure, and temperature. So first, performance monitoring. Then they said, okay, let's start doing machine learning on some of this to build out things like equipment predictive maintenance models. And what they really focused on is computer vision quality inspection: take pictures of parts as they go through a process, classify whether each picture was associated with a good or bad quality outcome, and then teach the machine to make that decision on its own. So now the camera is doing the inspections.

They built those machine learning models; all this data was on-prem, but they pushed it up to the cloud to develop the models, and then they pushed the models back into the plants, where they could take real-time actions through these computer vision quality inspections. So a great use case, and a great example of how you can start with monitoring and move to machine learning, but at the end of the day they're improving quality and improving equipment uptime, and that is the goal of most manufacturers.
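The "deploy the model to the edge and let it sniff the stream" step might look something like the sketch below: a previously trained classifier (trained here on synthetic data purely so the example runs) scores each incoming reading, and a work order is raised when the predicted probability of needing maintenance crosses a threshold. The 0.8 threshold, the feature names, and the print-based "work order" are all illustrative assumptions; a real system would call the maintenance management system's API and would load a model shipped from the data lake rather than train one inline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
FEATURES = ["vibration_g", "temp_c", "pressure_kpa"]  # assumed sensor channels

# Stand-in for a model trained upstream in the data lake and shipped to the edge.
X_hist = rng.normal(size=(2000, len(FEATURES)))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] + rng.normal(0, 0.5, 2000) > 1.0).astype(int)
model = LogisticRegression().fit(X_hist, y_hist)

MAINTENANCE_THRESHOLD = 0.8  # illustrative probability cut-off


def score_reading(machine_id: str, reading: dict) -> None:
    """Run edge inference on one reading and raise a work order if the risk is high."""
    x = np.array([[reading[f] for f in FEATURES]])
    p_fail = model.predict_proba(x)[0, 1]
    if p_fail >= MAINTENANCE_THRESHOLD:
        # In practice: create a work order in the maintenance management system.
        print(f"WORK ORDER: machine {machine_id}, predicted maintenance risk {p_fail:.2f}")


# Simulated stream of incoming readings.
for i in range(500):
    score_reading(
        machine_id=f"M-{i % 20:02d}",
        reading=dict(zip(FEATURES, rng.normal([0.5, 0.5, 0.0], 1.0))),
    )
```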
So with that being said, if you want to learn more, we've got a wealth of information on our website. You see the URL in front of you; please go there and you'll find a lot more detail on the use cases we're seeing in manufacturing and on the customers we work with. If you need that information, please do find it. With that, I'm going to turn it over to Dave. I think you had some questions you wanted to run by me.

>>I do, Michael, thank you very much for that. Before I get into the questions, I just wanted to make some observations. I was struck by what you were saying about the phases of industry. We talk about industry 4.0, and my observation is that traditionally machines have always replaced humans, but it's been around labor. The difference with 4.0, and what you talked about with connecting equipment, is that you're injecting machine intelligence, like the camera inspection example, and then the machines are taking action. That's different, and it's a really new kind of paradigm. The second thing that struck me is the cost: 20% of sales, and plant downtime costing many tens of billions of dollars a year. So that was huge; the business case for this is, I'm going to reduce my expected loss quite dramatically. And the third point, which we touched on in the morning sessions on the main stage, is that the world is hybrid. Everybody's trying to figure out hybrid and get hybrid right, and it certainly applies here. This is a hybrid world: regardless of where the data is, you've got to be able to get to it, blend it, enrich it, and then act on it. So anyway, those are my big takeaways. First question: in thinking about implementing connected manufacturing initiatives, what are people going to run into? What are the big challenges they're going to hit?

>>There are a few, but I think one of the key ones is bridging what we'll call the IT and OT data divide. Your IT systems are your ERP systems, your MES systems, your transactional systems that run on relational databases, and your IT departments are brilliant at running those. The difficulty in implementing these use cases is that you also have to deal with operational technology: all the equipment in your manufacturing plant that runs on its own proprietary networks with proprietary protocols. That information can be very, very difficult to get to, and it's much more unstructured than what comes from your IT side. So the key challenge is being able to bring these data sets together in a single place where you can do advanced analytics and leverage that diverse data for machine learning. If I had to boil it down to the single hardest thing in this type of environment, connected manufacturing, it's that operational technology has run on its own, in its own world, for a long time. The silos abound, but at the end of the day this is incredibly valuable data that can now be tapped to move those metrics we talked about around quality and uptime. So it's a huge opportunity.
>>Well, and again, this is a hybrid world, and you've got this world that's moving toward an equilibrium. You've got the OT side, pretty hardcore engineers, and we know IT. A lot of that data historically has been analog, and now it's getting instrumented and captured. So you've got that cultural challenge, and you've got to blend those two worlds. That's critical. Okay, Michael, let's talk about some of the use cases. You touched on some, but let's peel the onion a bit. In this world of connected manufacturing and analytics, when you talk to customers, what are the most common use cases you see?

>>Yeah, that's a great question, and you're right, I did allude to it a little earlier. I want people to think about a spectrum of use cases ranging from simple to complex, and you can get value even in the simple phases. The simplest use case really is monitoring: you monitor your equipment or your processes, and you just make sure you're staying within the bounds of your control plan. And this is much easier to do now, because there are more sensors and those sensors are moving more and more toward internet types of technology. So you've got the opportunity now to do some monitoring: no machine learning, just simple monitoring.

The next level, which we're seeing, is something we call quality event forensic analysis. On this one, imagine I've got warranty claims in the field, and I'm starting to see those claims tick up. What you want to be able to do is the forensic analysis back to the root cause within the manufacturing process. So this is about connecting the dots: I've got warranty issues; what were the manufacturing conditions of the day that caused them? Then you can also identify which other products were impacted by those same conditions, and recall those proactively and selectively, rather than recalling, say, an entire year's fleet of cars. Again, that's not machine learning; we're simply connecting the dots from warranty claims in the field to the manufacturing conditions of the day, so you can take corrective action.

But then you get into a whole slew of machine learning use cases, ranging from things like quality or yield optimization, where you collect sensor values and manufacturing yield values from your ERP system and determine which sensor values or factors drove good or bad yield outcomes. You identify the factors that matter most, and then you measure, monitor, and optimize those. That's how you optimize your yield. And then you go down to the more traditional machine learning use cases around predictive maintenance.
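Since the quality event forensics use case Michael describes is mostly about connecting the dots rather than machine learning, a join-based sketch captures the idea. Given synthetic warranty claims keyed by serial number, a production genealogy table, and process conditions per batch, it traces each claim back to its batch and the conditions of the day, then lists the other serial numbers built under the same conditions as candidates for a selective recall. All table and column names here are assumptions for the example.

```python
import pandas as pd

# Hypothetical claims from the field, keyed by product serial number.
claims = pd.DataFrame({"serial_no": ["SN1003", "SN1004"], "defect": ["seal leak", "seal leak"]})

# Production genealogy: which batch each serial number came from.
genealogy = pd.DataFrame({
    "serial_no": [f"SN10{i:02d}" for i in range(10)],
    "batch_id": ["B7"] * 5 + ["B8"] * 5,
})

# Process conditions recorded per batch on the day of manufacture.
conditions = pd.DataFrame({
    "batch_id": ["B7", "B8"],
    "press_temp_c": [212.0, 198.5],
    "cure_minutes": [31, 42],
})

# Step 1: trace each warranty claim back to its batch and the conditions of the day.
traced = claims.merge(genealogy, on="serial_no").merge(conditions, on="batch_id")
print(traced)

# Step 2: find every other unit built in the suspect batches, for a selective recall.
suspect_batches = traced["batch_id"].unique()
at_risk = genealogy[
    genealogy["batch_id"].isin(suspect_batches)
    & ~genealogy["serial_no"].isin(claims["serial_no"])
]
print("candidate units for selective recall:", sorted(at_risk["serial_no"]))
```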
So the key point here, Dave, is that depending on a customer's maturity around big data, you can start simply with monitoring and get a lot of value, then start bringing together more diverse data sets to do things like connect-the-dots analytics, and go all the way to the more advanced machine learning use cases. There's value to be had throughout.

>>I remember when the IT industry really started to think about IoT and OT in the early days. It reminds me of the old days when football fields were grass: the new player would come in with a perfectly white uniform, and he had to get dirty. IT had to get dirty as an industry and learn. So my question relates to other technology partners you might be working with, maybe new players in this space, to accelerate some of these solutions we've been talking about.

>>Yeah, that's a great question, and it goes back to something I alluded to earlier. We've had some great partners. One example is Litmus Automation, whose whole world is the OT world. What they've done is build adapters to connect to practically every industrial protocol, and they've said, hey, we can do that and then present a single interface for that data to the Cloudera data platform. We're really good at ingesting IT data and things like that, and we can leverage a company like Litmus to open the floodgates of that OT data, making it much easier to get it into our platform. Suddenly you've got all the data you need to implement those industry 4.0 analytics use cases. It really boils down to: can I break down that IT/OT barrier we've always had and bring together the data sets that really move the needle in terms of improving manufacturing performance?

>>Okay, thank you for that. Last question. Speaking of moving the needle, I want to steer this discussion toward the technology advances; I'd love to talk tech here. What are the key technology enablers and advancers, if you will, that are going to move connected manufacturing and machine learning forward in this transportation space? Sorry, manufacturing space.

>>Yeah, in the manufacturing space there are a few things. First of all, as we touched on, the fact that sensor prices have come down and sensors have become ubiquitous means, number one, we can finally get to the OT data. Number two, we now have the ability to store that data a whole lot more efficiently; we've got great capabilities to put it into the cloud and run the machine learning types of workloads there. If you're doing computer vision quality inspection, you've got GPUs to make those machine learning models much more effective. And you've got 5G technology that starts to blur, at least from a latency perspective, where you do your compute, whether at the edge or in the cloud.
For the super business-critical stuff, you probably don't want to rely on any type of network connection, but from a latency perspective you're starting to see the ability to do compute where it's most effective, and that's really important. And again, the machine learning capabilities: the ability to build GPU-accelerated machine learning models and then deploy them via over-the-air updates to your equipment. All of those things are making the advanced analytics and machine learning data life cycle faster and better, and at the end of the day, to your point, Dave, that equipment and those processes are getting much smarter, much more quickly.

>>Yeah, we've got a lot of data and we have way lower-cost processing platforms. I'll throw in NPUs as well; watch that space, neural processing units. Okay, Michael, we're going to leave it there. Thank you so much. Really appreciate your time.

>>Dave, I really appreciate it. And thanks to everybody who joined us today.

>>Yes, thank you for watching. Keep it right there.
Manufacturing Reduce Costs and Improve Quality with IoT Analytics
>>Okay. We're here in the second manufacturing drill down session with Michael Gerber. He was the managing director for automotive and manufacturing solutions at Cloudera. And we're going to continue the discussion with a look at how to lower costs and drive quality in IOT analytics with better uptime and hook. When you do the math, that's really quite obvious when the system is down, productivity is lost and it hits revenue and the bottom line improve quality drives, better service levels and reduces lost opportunities. Michael. Great to see you, >>Dave. All right, guys. Thank you so much. So I'll tell you, we're going to talk a little bit about connected manufacturing, right? And how those IOT IOT around connected manufacturing can do as Dave talked about improved quality outcomes for manufacturing improve and improve your plant uptime. So just a little bit quick, quick, little indulgent, quick history lesson. I promise to be quick. We've all heard about industry 4.0, right? That is the fourth industrial revolution. And that's really what we're here to talk about today. First industrial revolution, real simple, right? You had steam power, right? You would reduce backbreaking work. Second industrial revolution, mass assembly line. Right. So think about Henry Ford and motorized conveyor belts, mass automation, third industrial revolution. Things got interesting, right? You started to see automation, but that automation was done essentially programmed a robot to do something. It did the same thing over and over and over irrespective about of how your outside operations, your outside conditions change fourth industrial revolution, very different breakfasts. >>Now we're connecting, um, equipment and processes and getting feedback from it. And through machine learning, we can make those, um, those processes adapted right through machine learning. That's really what we're talking about in the fourth industrial revolution. And it is intrinsically connected to data and a data life cycle. And by the way, it's important, not just for a little bit of a slight issue. There we'll issue that, but it's important, not for technology sake, right? It's important because it actually drives very important business outcomes. First of all, falling, right? If you look at the cost of quality, even despite decades of, of, uh, companies and manufacturers moving to improve while its quality prompts still account to 20% of sales, right? So every fifth of what you meant or manufactured from a revenue perspective, you've got quality issues that are costing you a lot. Plant downtime, cost companies, $50 billion a year. >>So when we're talking about using data and these industry 4.0 types of use cases, connected data types of use cases, we're not doing it just narrowly to implement technology. We're doing it to move these from adverse, improving quality, reducing downtime. So let's talk about how a connected manufacturing data life cycle with what like, right. But so this is actually the business that cloud areas is in. Let's talk a little bit about that. So we call this manufacturing edge to AI. This is analytics, life something, and it starts with having your plants, right? Those plants are increasingly connected. As I said, sensor prices have come down two thirds over the last decade, right? And those sensors are connected over the internet. So suddenly we can collect all this data from your, um, manufacturing plants, and what do we want to be able to do? You know, we want to be able to collect it. 
>>We want to be able to analyze that data as it's coming across. Right? So, uh, in scream, right, we want to be able to analyze it and take intelligent real-time actions. Right? We might do some simple processing and filtering at the edge, but we really want to take real-time actions on that data. But, and this is the inference part of things, right? Taking that time. But this, the ability to take these real-time actions, um, is actually the result of a machine learning life cycle. I want to walk you through this, right? And it starts with, um, ingesting this data for the first time, putting it into our enterprise data lake, right? And that data lake enterprise data lake can be either within your data center or it could be in the cloud. You're going to, you're going to ingest that data. You're going to store it. >>You're going to enrich it with enterprise data sources. So now you'll have say sensor data and you'll have maintenance repair orders from your maintenance management systems. Right now you can start to think about do you're getting really nice data sets. You can start to say, Hey, which sensor values correlate to the need for machine maintenance, right? You start to see the data sets. They're becoming very compatible with machine learning, but so you bring these datasets together. You process that you align your time series data from your sensors to your timestamp data from your, um, you know, from your enterprise systems that your maintenance management system, as I mentioned, you know, once you've done that, we could put a query layer on top. So now we can start to do advanced analytics query across all these different types of data sets. But as I mentioned to you, and what's really important here is the fact that once you've stored one histories that say that you can build out those machine learning models I talked to you about earlier. >>So like I said, you can start to say, which sensor values drove the need of correlated to the need for equipment maintenance for my maintenance management systems, right? And then you can build out those models and say, Hey, here are the sensor values of the conditions that predict the need for maintenance. And once you understand that you can actually then build out those models, you deploy the models out to the edge where they will then work in that inference mode, that photographer, I will continuously sniff that data as it's coming and say, Hey, which are the, are we experiencing those conditions that, that predicted the need for maintenance? If so, let's take real-time action, right? Let's schedule a work order and equipment maintenance work order in the past, let's in the future, let's order the parts ahead of time before that a piece of equipment fails and allows us to be very, very proactive. >>So, you know, we have, this is a, one of the Mo the most popular use cases we're seeing in terms of connected, connected manufacturing. And we're working with many different, um, manufacturers around the world. I want to just highlight one of them. Cause I thought it's really interesting. This company is bought for Russia. And for SIA for ACA is the, um, is the, is the, um, the, uh, a supplier associated with out of France. They are huge, right? This is a multi-national automotive, um, parts and systems supplier. And as you can see, they operate in 300 sites in 35 countries. So very global, they connected 2000 machines, right. Um, I mean at once be able to take data from that. They started off with learning how to ingest the data. 
They started off very well with, um, you know, with, uh, manufacturing control towers, right? >>To be able to just monitor the data from coming in, you know, monitor the process. That was the first step, right. Uh, and you know, 2000 machines, 300 different variables, things like, um, vibration pressure temperature, right? So first let's do performance monitoring. Then they said, okay, let's start doing machine learning on some of these things, just start to build out things like equipment, um, predictive maintenance models, or compute. What they really focused on is computer vision while the inspection. So let's take pictures of, um, parts as they go through a process and then classify what that was this picture associated with the good or bad quality outcome. Then you teach the machine to make that decision on its own. So now, now the machine, the camera is doing the inspections for you. And so they both had those machine learning models. They took that data, all this data was on-prem, but they pushed that data up to the cloud to do the machine learning models, develop those machine learning models. >>Then they push the machine learning models back into the plants where they, where they could take real-time actions through these computer vision, quality inspections. So great use case. Um, great example of how you start with monitoring, move to machine learning, but at the end of the day, or improving quality and improving, um, uh, equipment uptime. And that is the goal of most manufacturers. So with that being said, um, I would like to say, if you want to learn some more, um, we've got a wealth of information on our website. You see the URL in front of you, please go, then you'll learn. There's a lot of information there in terms of the use cases that we're seeing in manufacturing and a lot more detail and a lot more talk about a lot more customers we'll work with. If you need that information, please do find it. Um, with that, I'm going to turn it over to Dave, to Steve. I think you had some questions you want to run by. >>I do, Michael, thank you very much for that. And before I get into the questions, I just wanted to sort of make some observations that was, you know, struck by what you're saying about the phases of industry. We talk about industry 4.0, and my observation is that, you know, traditionally, you know, machines have always replaced humans, but it's been around labor and, and the difference with 4.0, and what you talked about with connecting equipment is you're injecting machine intelligence. Now the camera inspection example, and then the machines are taking action, right? That's, that's different and, and is a really new kind of paradigm here. I think the, the second thing that struck me is, you know, the costs, you know, 20% of, of sales and plant downtime costing, you know, many tens of billions of dollars a year. Um, so that was huge. I mean, the business case for this is I'm going to reduce my expected loss quite dramatically. >>And then I think the third point, which we turned in the morning sessions, and the main stage is really this, the world is hybrid. Everybody's trying to figure out hybrid, get hybrid, right. And it certainly applies here. Uh, this is, this is a hybrid world you've got to accommodate, you know, regardless of where the data is, you've got to be able to get to it, blend it, enrich it, and then act on it. So anyway, those are my big, big takeaways. Um, so first question. 
So in thinking about implementing connected manufacturing initiatives, what are people going to run into? What are the big challenges that they're going to, they're going to hit? >>No, there's, there's there, there's a few of the, but I think, you know, one of the, uh, one of the key ones is bridging what we'll call the it and OT data divide, right. And what we mean by the it, you know, your, it systems are the ones, your ERP systems, your MES system, Freightos your transactional systems that run on relational databases and your it departments are brilliant at running on that, right? The difficulty becomes an implementing these use cases that you also have to deal with operational technology, right? And those are all of the, that's all the equipment in your manufacturing plant that runs on its proprietary network with proprietary pro protocols. That information can be very, very difficult to get to. Right? So, and it's uncertain, it's a much more unstructured than from your OT. So the key challenge is being able to bring these data sets together in a single place where you can start to do advanced analytics and leverage that diverse data to do machine learning. Right? So that is one of the, if I had to boil it down to the single hardest thing in this, uh, in this, in this type of environment, nectar manufacturing is that that operational technology has kind of run on its own in its own. And for a long time, the silos, the silos, a bound, but at the end of the day, this is incredibly valuable data that now can be tapped, um, um, to, to, to, to move those, those metrics we talked about right around quality and uptime. So a huge opportunity. >>Well, and again, this is a hybrid team and you, you've kind of got this world, that's going toward an equilibrium. You've got the OT side and, you know, pretty hardcore engineers. And we know, we know it. A lot of that data historically has been analog data. This is Chris now is getting, you know, instrumented and captured. Uh, and so you've got that, that cultural challenge and, you know, you got to blend those two worlds. That's critical. Okay. So Michael, let's talk about some of the use cases you touched on, on some, but let's peel the onion a bit when you're thinking about this world of connected manufacturing and analytics in that space, when you talk to customers, you know, what are the most common use cases that you see? >>Yeah, that's a great, that's a great question. And you're right. I did allude to a little bit earlier, but there really is. I want people to think about this, a spectrum of use cases ranging from simple to complex, but you can get value even in the simple phases. And when I talk about the simple use cases, the simplest use cases really is really around monitoring, right? So in this, you monitor your equipment or monitor your processes, right? And you just make sure that you're staying within the bounds of your control plan, right? And this is much easier to do now. Right? Cause some of these sensors are a more sensors and those sensors are moving more and more towards the internet types of technology. So, Hey, you've got the opportunity now to be able to do some monitoring. Okay. No machine learning, we're just talking about simple monitoring next level down. >>And we're seeing is something we would call quality event forensic announces. And now on this one, you say, imagine I'm got warranty plans in the, in the field, right? So I'm starting to see warranty claims kicked off on them. 
And what you simply want to be able to do is the forensic analysis back to the root cause within the manufacturing process that caused it. So this is about connecting the dots: I've got warranty issues; what were the manufacturing conditions of the day that caused them? Then you could also say which other products were impacted by those same conditions, and you can recall those proactively and selectively rather than, say, recalling an entire year's fleet of a car. And that, again, is also not machine learning. It is simply connecting the dots from warranty claims in the field to the manufacturing conditions of the day so that you can take corrective actions. But then you get into a whole slew of machine learning use cases, you know, and that ranges from things like quality or, say, yield optimization, where you start to collect sensor values and manufacturing yield values from your ERP system. >>And you can start to say which, you know, which sensor values or factors drove good or bad yield outcomes. And you can identify those factors that are the most important, so you measure those, you monitor those, and you optimize those, right? That's how you optimize your yield. And then you go down to the more traditional machine learning use cases around predictive maintenance. So the key point here, Dave, is, look, depending on a customer's maturity around big data, you could start simply with monitoring and get a lot of value, then bring together more diverse data sets to do things like connect-the-dots analytics, all the way to the more advanced machine learning use cases. There's value to be had throughout. >>I remember when the IT industry really started to think about IoT in the early days. It reminds me of the old days of football when fields were grass, and a new player would come in with a perfectly white uniform, and you knew he hadn't gotten dirty yet. We had to get dirty as an industry, you know, and learn. And so my question relates to other technology partners that you might be working with, that are maybe new in this space, that accelerate some of these solutions that we've been talking about. >>Yeah, that's a great question. It kind of goes back to one of the things I alluded to a little bit earlier. We've got some great partners. A partner, for example, Litmus Automation, whose whole world is the OT world. And what they've done, for example, is they've built adapters to be able to get to practically every industrial protocol. And they've said, hey, we can do that, and then give a single interface of that data to the Cloudera data platform. So now, you know, we're really good at ingesting IT data and things like that, and we can leverage, say, a company like Litmus that can open the floodgates of that OT data, making it much easier to get that data into our platform. And suddenly you've got all the data you need to implement those types of industry 4.0 analytics use cases. And it really boils down to, can I get to that data? Can I break down that IT/OT barrier that we've always had and bring together those data sets that really move the needle in terms of improving manufacturing performance? >>Okay, thank you for that. Last question. Speaking of moving the needle, I want to lead this discussion on the technology advances.
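The yield-optimization pattern just described, join per-batch sensor values from the OT side with yield values from the ERP system and rank which factors drove good or bad outcomes, can be sketched as follows. The file names, column names, and choice of a random-forest model are illustrative assumptions, not an actual customer dataset or a Cloudera tool.

```python
# Hedged sketch: rank which process conditions drove yield, using assumed column names.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical extracts: per-batch sensor aggregates (OT) and yield per batch (IT/ERP).
sensors = pd.read_csv("batch_sensor_aggregates.csv")   # batch_id, vibration, pressure, temperature, ...
yields_ = pd.read_csv("erp_batch_yield.csv")           # batch_id, yield_pct

df = sensors.merge(yields_, on="batch_id")             # connect the dots: OT conditions <-> IT outcome

features = df.drop(columns=["batch_id", "yield_pct"])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, df["yield_pct"])

# Factors most associated with yield, to measure, monitor, and optimize first.
ranking = pd.Series(model.feature_importances_, index=features.columns)
print(ranking.sort_values(ascending=False).head(10))
```

The same join-then-analyze shape covers the warranty forensics case: swap the yield column for a warranty-claim flag and the merge connects field failures back to the manufacturing conditions of the day.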
I'd love to talk tech here. What are the key technology enablers and advances, if you will, that are going to move connected manufacturing and machine learning forward in this transportation space? Sorry, manufacturing and factory space. >>Yeah, in the manufacturing space there are a few things. First of all, I think the fact that, and I know we touched upon this, sensor prices have come down and sensors have become ubiquitous means that, number one, we're finally able to get to the OT data, right? That's number one. Number two, I think, you know, we now have the ability to store that data a whole lot more efficiently. We've got great capabilities to be able to do that, to put it over into the cloud, to do the machine learning types of workloads. You've got things like, if you're doing computer vision, well, in that respect, GPUs to make those machine learning models much more effective. You've got 5G technology that starts to blur, at least from a latency perspective, where you do your compute, whether it be on the edge or in the cloud. You've got more, you know, super business critical stuff. >>You probably don't want to rely on any type of network connection, but from a latency perspective, you're starting to see, you know, the ability to do compute where it's most effective now. And that's really important. And again, the machine learning capabilities, you know, GPU-level machine learning, building those models and then deploying them by over-the-air updates to your equipment. All of those things are making the advanced analytics and machine learning data life cycle just faster and better. And at the end of the day, to your point, Dave, equipment and processes are getting much smarter, much more quickly. >>Yep. We've got a lot of data and we have way lower cost processing platforms. I'll throw in NPUs as well, watch that space, neural processing units. Okay, Michael, we're going to leave it there. Thank you so much. Really appreciate your time. >>Dave, I really appreciate it. And thanks, thanks to everybody who joined. Thanks for joining today. >>Yes. Thank you for watching. Keep it right there.
Simon Crosby, Swim | CUBE on Cloud
>> Hi, I'm Stu Miniman, and welcome back to theCUBE on Cloud, talking about really important topics as to how developers are changing how they build their applications and where they live, of course, a long discussion we've had for a number of years. You know, how do things change in hybrid environments? We've been talking for years about public cloud and private cloud, and I'm really excited for this session. We're going to talk about how the edge environment and AI impact that. So happy to welcome back one of our CUBE alumni, Simon Crosby, who is currently the Chief Technology Officer with Swim. He's got plenty of viewpoints on AI and the edge, and knows the developer world well. Simon, welcome back. Thanks so much for joining us. >> Thank you, Stu, for having me. >> All right, so let's start for a second. Let's talk about developers. You know, it used to be, for years we talked about, you know, what's the level of abstraction we get. Where does it sit, you know, do I put it on bare metal? Do I virtualize it? Do I containerize it? Do I make it serverless? A lot of those things, you know, the app developer doesn't want to even think about, but location matters a whole lot when we're talking about things like AI: where do I have all my data so that I can do my training? Where do I actually have to do the processing? And of course, edge just changes things by orders of magnitude, things like latency and where data lives and everything like that. So with that as a setup, I would love to get just your framework as to what you're hearing from developers, and then we'll get into some of the solutions that you and your team are helping them with to do their jobs. >> Well, you're absolutely right, Stu. The data onslaught is very real. Companies that I deal with are facing more and more real-time data from products, from their infrastructure, from their partners, whatever it happens to be, and they need to make decisions rapidly. And the problem that they're facing is that traditional ways of processing that data are too slow. So perhaps the big data approach, which by now is a bit old, it's a bit long in the tooth, where you store data and then you analyze it later, is problematic. First of all, data streams are boundless, so you don't really know when to analyze, but second, you can't store it all. And so the store-then-analyze approach has to change, and Swim is trying to do something about this by adopting a process of analyzing on the fly. So as data is generated, as you receive events, you don't bother to store them. You analyze them, and then, if you have to, you store the data, but you need to analyze as you receive data and react immediately to be able to generate reasonable insights or predictions that can drive commerce and decisions in the real world. >> Yeah, absolutely. I remember back in the early days of big data, you know, real time got thrown around a little, but it was usually, I need to react fast enough to make sure we don't lose the customer, react to something. But it was, we gather all the data and let's move compute to the data. Today, as you talk about, you know, real-time streams are so important. We've been talking about observability for the last couple of years to just really understand the systems and the outputs, more than looking back historically at where things were, waiting for alerts. So could you give us some examples, if you would, as to, you know, those streams, you know, what is so important about being able to interact with and leverage that data when you need it?
And boy, it's great if we can use it then and not have to store it and think about it later. Obviously there are some benefits there, because-- >> Well, every product nowadays has a CPU, right? And so there's more and more data. And just let me give you an example: Swim processes real-time data from more than a hundred million mobile devices in real time for a mobile operator. And what we're doing there is we're optimizing connection quality between devices and the network. Now, that volume of data is more than four petabytes per day, okay. Now there is simply no way you can ever store that and analyze it later. The interesting thing about this is that if you adopt an architecture where you analyze, and then store only if you really have to, you get to take advantage of Moore's Law. So you're running at CPU and memory speeds instead of at disk speed. And so that gives you a million-fold speed up, and it also means you don't have the latency problem of reaching out to remote storage, a database, or whatever. And so that reduces costs. So we can do it on about 10% of the infrastructure that they previously had for a Hadoop-style implementation. >> So, maybe it would help if we just explain. When we say edge, people think of a lot of different things. Is it, you know, an IoT device sitting out at the edge? Are we talking about the telecom edge? We've been watching AWS for years, you know, spider out their services into various environments. So when you talk about the type of solutions you're doing and what your customers have, is it the telecom edge? Is it the actual device edge? You know, where does processing happen and where do these, you know, services that work on it live? >> So I think the right way to think about edge is, where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze data in the clear. So edge computing is not so much a place, in my view, as the first opportunity you have to process data in the clear and to make sense of it. And then edge makes sense, in terms of latency, by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users, you know, quickly. So edge for me often is the cloud. >> Excellent. One of the other things I think about, back from, you know, the big data days or even earlier, was how long it took to get from the raw data to processing that data, to being able to get some insight, and then being able to take action. It sure sounds like we're trying to collapse that completely. So, you know, how do we do that? You know, can we actually, you know, build the system so that we can, you know, in that real-time, continuous model that you talk about, you know, take care of it and move on? >> So one of the wonderful things about cloud computing is that two major abstractions have really served us. And those are REST, which is stateless computing, and databases. And REST means any old server can do the job for me, and then the database is just an API call away. The problem with that is that it's desperately slow. So when I say desperately slow, I mean it's probably thrown away the last 10 years of Moore's law. Just think about it this way: your CPU runs at gigahertz and the network runs at milliseconds.
So by definition, every time you reach out to a data store, you're going a million times slower than your CPU. That's terrible. It's absolutely tragic, okay. So a model which is much more effective is to have an in-memory compute architecture in which you engage in stateful computation. So instead of having to reach out to a database every time to update the database and, whatever, you know, store something, and then fetch it again a few moments later when the next event arrives, you keep state in memory and you compute on the fly as data arrives. And that way you get a million times speed up. You also end up with this tremendous cost reduction because you don't end up with as many instances having to compute, by comparison. So let me give you a quick example. If you go to traffic.swim.ai you can see the real-time state of the traffic infrastructure in Palo Alto. And each one of those intersections is predicting its own future. Now, the volume of data from just a few hundred lights in Palo Alto is about four terabytes a day. And sure, you can deal with this in AWS Lambda. There are lots and lots of servers up there. But the problem is that the end-to-end per-event latency is about 100 milliseconds. And, you know, if I'm dealing with 30,000 events a second, that's just too much. So solving that problem with a stateless architecture is extraordinarily expensive, more than $5,000 a month. Whereas the stateful architecture, which you could think of as an evolution of, you know, something reactive or the actor model, gets you, you know, something like a 10th of the cost, okay. So cloud is fabulous for things that need to scale wide, but a stateful model is required for dealing with things which update you rapidly or regularly about their changes in state. >> Yeah, absolutely. You know, I think about, as I mentioned before, AI training models. Often, if you look at something like autonomous vehicles, the massive amount of data that they need to process, you know, has to happen in the public cloud. But then that gets pushed back down to the end device, in this case a car, because it needs to be able to react in real time, and it gets fed, at a regular update, the new training algorithms that it has there. What are you seeing-- >> I have a strong opinion on this training approach and data science in general, and that is that there aren't enough data scientists or, you know, smart people to train these algorithms, deploy them to the edge and so on. And so there is an alternative worldview, which is a much simpler one, and that is that relatively simple algorithms deployed at scale to stateful representatives, let's call them digital twins of things, can deliver enormous improvements in behavior as things learn for themselves. So the way I think, at least in this edge world, things get smarter is that relatively simple models of things will learn for themselves, create their own futures based on what they can see, and then react. And so this idea that we have lots and lots of data scientists dealing with vast amounts of information in the cloud is suitable for certain algorithms, but it doesn't work for the vast majority of applications. >> So where are we with the state of what developers need to think about? You mentioned that there's compute in most devices. That's true, but, you know, do they need some special Nvidia chipset out there? Are there certain programming languages that you are seeing as more prevalent? Interoperability? Give us a little bit of, you know, some tips and tricks for those developing.
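A toy sketch of the stateful, analyze-on-the-fly model Simon is describing: per-device state lives in memory, every event updates it immediately, and only the derived insight streams out, with no database round trip per event. The event fields, window size, and threshold are illustrative assumptions, not Swim's implementation.

```python
# Hedged sketch of analyze-on-the-fly: keep per-device state in memory,
# update it as each event arrives, and emit insights instead of storing raw events.
from collections import defaultdict, deque

state = defaultdict(lambda: deque(maxlen=50))  # recent signal quality per device

def on_event(event):
    """event is an assumed shape: {'device_id': str, 'signal_quality': float}."""
    window = state[event["device_id"]]
    window.append(event["signal_quality"])
    avg = sum(window) / len(window)          # computed in memory, no database round trip
    if avg < 0.6:                            # illustrative threshold, not a real tuning value
        return {"device_id": event["device_id"], "prediction": "degrading", "avg": avg}
    return None

# Usage: feed a boundless stream; raw events are never stored, only insights stream out.
def process(stream):
    for ev in stream:
        insight = on_event(ev)
        if insight:
            print(insight)
```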
>> Super. So number one, a stateful architecture is fundamental, and sure, React is well known, and there's Akka, for example, and Spurling. Swim is another, so I'm going to use some of its language, and I would encourage you to look at swimos.org and go play there. A stateful architecture, which allows actors, small concurrent objects, to statefully evolve their own state based on updates from the real world, is fundamental. By the way, in Swim we use data to build these models. So these little agents for things, we call them web agents because the object ID is a URI, statefully evolve by processing their own real-world data and statefully representing it. And then they do this wonderful thing, which is build a model on the fly. And they build a model by linking to things that they're related to. So an intersection would link to all of its sensors, but it would also link to all of its neighbors, because linking is like a sub in pub/sub, and it allows that web agent then to continually analyze, learn, and predict on the fly. And so every one of these concurrent objects is doing this job of analyzing its own raw data and then predicting from that and streaming the result. So in Swim, you get streamed raw data in, and what streams out is predictions, predictions about the future state of the infrastructure. And that's a very powerful stateful approach which can run all in memory, no storage required. By the way, it's still persistent, so if you lose a node, you can just come back up and carry on, but there's no need to store huge amounts of raw data if you don't need it. And let me just be clear: the volumes of raw data from the real world are staggering, right? So four terabytes a day from Palo Alto, but Las Vegas is about 60 terabytes a day from the traffic lights. More than 100 million mobile devices is tens of petabytes per day, which is just too much to store. >> Well, Simon, you've mentioned that we have a shortage when it comes to data scientists and the people that can be involved in those things. How about from the developer side? Do most enterprises that you're talking to have the skill set? Is the ecosystem mature enough for companies to get involved? What do we need to do, looking forward, to help companies be able to take advantage of this opportunity? >> Yeah, so there is this huge challenge in terms of, I guess, just cloud native skills. And this is exacerbated the more you get out to, I guess, what you could think of as traditional kinds of companies, all of whom have tons and tons of data sources. So we need to make it easy, and Swim tries to do this by effectively using skills that people already have, Java or JavaScript, and giving them easy ways to develop, deploy, and then run applications without thinking about them. So instead of binding developers to notions of place and where databases are and all that sort of stuff, if they can write simple object-oriented programs about things like intersections and push buttons and pedestrian lights and in-road loops and so on, and simply relate basic objects in the world to each other, then we let data build the model by essentially creating these little concurrent objects for each thing, and they will then link to each other and solve the problem. We end up solving a huge problem for developers too, which is that they don't need to acquire complicated cloud-native skill sets to get to work. >> Well, absolutely, Simon. It's something we've been trying to do for a long time, to truly simplify things.
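Here is a rough sketch, in plain Python, of the web-agent pattern described above: a small concurrent object identified by a URI that keeps its own state, links to related agents the way a subscriber links in pub/sub, and streams out a prediction as data arrives. It illustrates the pattern only and is not the SwimOS API; the class, the registry, and the rolling-average "model" are assumptions.

```python
# Illustrative pattern only, not the SwimOS API: a tiny "web agent" keyed by a URI
# that keeps its own state, links to neighbors, and streams out a naive prediction.
class WebAgent:
    registry = {}                       # uri -> agent, stands in for a runtime's routing

    def __init__(self, uri):
        self.uri = uri
        self.history = []               # recent observations for this thing only
        self.links = set()              # URIs of related agents (sensors, neighbors)
        WebAgent.registry[uri] = self

    def link(self, other_uri):
        self.links.add(other_uri)       # like subscribing in pub/sub

    def on_data(self, value):
        self.history = (self.history + [value])[-20:]
        prediction = sum(self.history) / len(self.history)   # stand-in "model"
        for uri in self.links:          # stream the derived insight, not the raw data
            WebAgent.registry[uri].on_neighbor_update(self.uri, prediction)
        return prediction

    def on_neighbor_update(self, neighbor_uri, prediction):
        pass                            # e.g. an intersection reacting to a neighbor's forecast

# Usage sketch: one agent per intersection, linked to its neighbor.
a = WebAgent("/intersection/1st-and-main")
b = WebAgent("/intersection/2nd-and-main")
a.link(b.uri)
a.on_data(17)  # e.g. vehicle count this interval
```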
Want to let you have the final word. If you look out there at the opportunity and the challenges in the space, what final takeaways would you give to our audience? >> So, very simple. If you adopt a stateful computing architecture, like Swim, you get to go a million times faster. The applications always have an answer. They analyze, learn, and predict on the fly, and they go a million times faster. They use 10% less, no, sorry, 10% of the infrastructure of a store-then-analyze approach. And it's the way of the future. >> Simon Crosby, thanks so much for sharing. Great having you on the program. >> Thank you, Stu. >> And thank you for joining. I'm Stu Miniman. Thank you, as always, for watching theCUBE.
Ashesh Badani, Stefanie Chiras & Joe Fitzgerald, Red Hat | AnsibleFest 2020
>> Narrator: From around the globe, it's theCUBE with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> The ascendancy of massive clouds underscored the limits of human labor. People, they simply don't scale at the pace of today's technology. And this trend created an automation mandate for IT which has been further accentuated by the pandemic. The world is witnessing the build-out of a massively distributed system that comprises on-prem apps, public clouds and edge computing. The challenge we face is how to go from managing things you can see and touch to cost effectively managing, securing and scaling these vast systems. It requires an automation first mindset. Hello, everyone. This is Dave Vellante and welcome back to AnsibleFest 2020. We have a great panel to wrap up this show. With me are our three excellent guests and CUBE alums. Ashesh Badani is the Senior Vice President of Cloud Platforms at Red Hat. Ashesh, good to see you again. Thanks for coming on. >> Yeah, likewise. Thanks for having me on again, Dave. >> Stefanie Chiras is Vice President and General Manager of the RHEL Business Unit and my sports buddy. Stefanie, glad to see you back in the New England area. I knew you'd be back. >> Yeah, good to see you, Dave. Thanks for having us today. >> You're very welcome. And then finally, Joe Fitzgerald, longtime CUBE alum, Vice President and General Manager of the Management Business Unit at Red Hat. Joe, good to see you. >> Hey, Dave, good to be here with you. >> Ashesh, I'm going to start with you. Lay out the big picture for us. So how do you see this evolution to what we sometimes talk about as hybrid cloud, but really truly a hybrid cloud environment across these three platforms that I just talked about? >> Yeah, let me start off by echoing something that most of your viewers have probably heard in the past. There's always this notion about developers, developers, developers. And you know, that still holds true. We aren't going away from that anymore. Developers are the new kingmakers. But increasingly, as the scope and complexity of applications and services that are deployed in this heterogeneous environment increases, it's more and more about automation, automation, automation. In the times we live in today, even, you know, before dealing with the crises that, you know, we have, just the sheer magnitude of requirements that are being placed on enterprises and expectations from customers require us to be more and more focused on automating tasks which humans just can't keep up with. So you know, as we look forward, this conversation here today, you know, what Ansible's doing, you know, is squarely aimed at dealing with this complexity that we all face. >> So Stefanie, I wonder if you could talk about what it's going to take to implement what I call this true hybrid cloud, this connection and management of this environment. RHEL is obviously a key piece of that. That's going to be your business unit, but take us through your thoughts there. >> Yeah, so I'm kind of building on what Ashesh said. When we look at this hybrid cloud world, right, which now hybrid is much more than it was considered five years ago. It used to be hybrid was on-prem versus off-prem. Now, hybrid translates to many layers in the stack. It can be VMs hybrid with containers. It can be on-prem with off-prem and clearly with edge involved, as well. 
Whenever you start to require the ability to bridge across these, that's where we focus on having a platform that allows you to access sort of all of those and be able to deploy your applications in a simple way. When I look at what customers require, it's all about speed of deploying applications, right, build, deploy and run your applications. It's about stability, which is clearly where we're focused on RHEL being able to provide that stability across multiple types of hybrid deployment models. And third is all about scale. It is absolutely all about scale and that's across multiple ranges in hybrid, be it on-prem, off-prem, edge and that's where all of this automation comes in, so to me, it's really about where do you make those strategic decisions that allow you to choose, right, for the flexibility that you need and still be able to deploy applications with speed, have that stability, resiliency, and be able to scale. >> So Joe, let's talk about your swim lane and it's weird to even use that term, right? 'Cause as Stefanie just said, we're kind of breaking down all these silos that we talk in terms of platform, but how do you see this evolving, and specifically, what's the contribution from a management perspective? >> Right, so Stefanie and Ashesh talked about sort of speed, scale and complexity. Right, people are trying to deploy things faster or larger scale, and oh, by the way, keep everything highly available and secure. That's a challenge, right? And so, you know, interestingly enough, Red Hat, about five years ago, we recognized that automation was going to be a problem as people were moving into open hybrid clouds, which we've been working with our customers for years on. And so we acquired this small company called Ansible, which had some really early emerging technology, all open source, right, to do automation. And what we've done over the past five years is we've really amplified that automation and amplified the innovation in that community to be able to provide automation across a wide array of domains that you need to automate, right, and to be able to plug that in to all the different processes that people need in order to be able to go faster, but to track, manage, secure and govern these kind of environments. So we made this bet years ago and it's paying off for Red Hat in very big ways. >> I mean, no doubt about it. I mean, when you guys bought Ansible, so it wasn't clear that it was going to be the clear leader. It is now. I mean, it's pulled ahead of Chef, Puppet. You saw, you know, VMware bought Salt, but I mean, Ansible very clearly has, based on our surveys, the greatest market momentum. We're going to talk about that. I know some of the other analysts have chimed in on this, but let me come back to this notion of on-prem and cloud and edge and this is complicated. I mean, the edge, it's kind of its own island, isn't it? I mean, you got the IT and the OT schism, so maybe you could talk a little bit about how you see those worlds coming together, the cloud, the on-prem, the edge. Maybe Stefanie, you can start. >> Yeah, I think the magic, Dave, is going to happen when it's not its own island, right, as we start to see this world driven by data cause the spread of a data center to be really dis-aggregated and allow that compute to move out closer to the data, the magic happens when it doesn't feel like an island, right, that's the beauty and the promise of hybrid. 
So when you start to look at what can you provide that is consistent that serves as a single language that you can talk to from on-prem, off-prem and edge, you know, it all comes down to, for us, having a platform that you can build once and deploy across all of those, but the real delicacy with edge is there are some different deployment models. I think that comes into deployment space and we're clearly getting feedback from customers. We're working on some capabilities where edge requires some different deployment models in the ways you update, et cetera, and thanks to all of you out there who are working with us upstream in order to deliver that. And I think the second place where it's unique is in this ability to manage and automate out at the edge, but our goal is certainly at our platform levels, whether it be on RHEL, whether it be on OpenShift to provide that consistent platform that allows you that ease of deployment, then you got to manage and automate it and that's where the whole Ansible and the ecosystem really plays in. You need that ecosystem and that's always what I love about AnsibleFest is this community comes together and it's a vibrant community, for sure. >> Well, I mean, Ashesh, you guys are betting big on this and I often think of the cloud is just this one big cloud. You got the on-prem cloud, you got the public clouds. Edge becomes just an extension of that cloud. Is that how you think about it and what is it actually going to take to make that edge not an island? >> Yeah, great point, Dave, and that's exactly how we think about it. We've always thought about our vision of the cloud as being a platform and abstraction that spans all the underlying infrastructure that the user can take advantage of, so if it happens to reside in a data center, some in a private cloud running off a data center, more increasingly in the public cloud setting, and as Stefanie called out, we're also starting to see edge deployments come in. We're seeing, you know, big build-outs in the work we're doing with telecom providers from a 5G perspective that's helping drive that. We're seeing, if you will, IOT-like opportunities with, let's say, the automotive sector or some in the retail sector, as well. And so this fabric, if you will, needs to span this entire set of deployment that a customer will take advantage of. And Joe started touching on this a little bit, right, with this notion of the speed, scale and complexity, so we see this platform needing to expand to all these footprints that customers are using. At the same time, the requirements that they have, even when they're going out the edge, is the same with regard to what they see in the data center and the public cloud, so putting all that together really is our sweet spot. That's our focus. And to the point you're making, Dave, that's where we're making a huge bet across all of Red Hat. >> So I mentioned, you know, some of our research and I do these breaking analysis segments every week and recently I was digging into cloud and specifically was interested in hybrid and multi. And you know, hybrid been I think pretty well understood for awhile. Multi I think was a lot of, you know, a lot of talk, but it's becoming real and the data really shows that. It shows OpenShift and Ansible have momentum. I mentioned that before. Yeah, you know, obviously VMware is there, but clearly Red Hat is well positioned specifically in multicloud and hybrid. And I know some of the other analyst firms have picked up on this. 
What are you guys seeing in the market? Maybe Joe, you can chime in and Ashesh, you can maybe add some color. >> Yeah, so you know, there's a lot of fashion, right, around hybrid and multicloud today, so every vendor is jumping on with multicloud storing. And you know, a lot of the vendors' strategies are, pick my solution and vertically use my stuff in the public cloud on-premise, maybe even at the edge, right, and you'll be fine. And you know, obviously customers don't like lock-in. They like to be able to take advantage of the best services, availability, security, different things that are available in each of these different clouds, right? So there is a strong preference for hybrid and multicloud. Red Hat is sort of the Switzerland of hybrid and multicloud because we enable you to run your workloads across all these different substrates, whether it's in public clouds, multiple, right, into the data center and physical, virtual, bare metal, out to the edge and edge is not a single homogeneous, you know, set of hardware or even implementation. It varies a lot by vertical, so you have a lot of diversity, right? And so Red Hat is really good at helping provide the platforms like OpenShift and RHEL that are going to provide that consistency across those different environments or also in the case of Ansible to provide automation that's going to match the physics of management and automation that are required across each of those different environments. Trust me, managing or automating something at the edge and with very small footprint of some device across the constraint network is very, very different than managing things in a public cloud or in a data center and that's where I think Red Hat is really focused and that's our sweet spot, helping people manage those environments. >> And Ashesh, you guys have obviously put a lot of effort there. If you could maybe comment. >> Yeah, I was just going to say, Dave, I'll add just really quickly to what Joe said. He said it well. But the thing I will add is the way for us to succeed here is to follow the user, follow the customer. Right, instead of us just coming out with regard to what we believe the path to be, you know, we're really kind of working closely with the actual customers that we have. So for example, recently been working with a large water utility in Italy, but they're thinking about, you know, the world that they live in and how can they go off and, you know, have kiosks that are spread throughout Italy, able to provide reports with regard to the quality of the water that's available, as well as other services to all their citizens. But it's really interesting use case for us to go off and pursue because in some sense, you can ask yourself, well, is that public cloud? Are they going to take advantage of those services? Is that, you know, private cloud? Is that data center, is that IOT, is that edge? At a certain point in time, what you've got to think about is, well, we've got to provide integrated end-to-end solution that spans all of these different worlds, and so as long as I think we keep that focus, as long as we make sure our North Star is really what the user's trying to do, what problem they're trying to solve, I think we'll come out just fine on the other side of this. >> So I'd love to get all your thoughts, all three of you, on just what's going on in containers, generally, Kubernetes, specifically. 
I mean, everybody knows it's a hot space and the data shows that it is maturing, but it's amazing to me how much momentum it still has. I mean, it's like the new shiny toy, but it's everywhere and so it's able to sort of maintain that velocity and it's really becoming the go-to cloud native development platform, so the question is how is Red Hat, you know, helping your customers connect OpenShift to the rest of their IT infrastructure, platforms, their processes, the tools. I mean, who wants to start? I'd love to hear from all three of you. Ashesh, why don't you kick it off and then we'll just go left to right. >> So Dave, we've spoken to you and to folks the CUBE, as well, other for many years on this. We've made a huge investment in the Kubernetes market and been one of the earliest to do that and we continue to believe in the promise that it delivers to users, this notion of being able to have an environment that customers can use regardless of the underlying choices that they make. Here's an extremely powerful one, it's truly an open source, right? This is key to, you know, what we do. Increasingly, what we're working on is to ensure that one, if you make a commitment to Kubernetes and increasingly we see lots of customers around the world doing that, that we ensure that we're working closely, that our entire portfolio helps support that. So if you're going to make a choice with regard to Kubernetes base deployment, we help support you running it yourself wherever it is that you choose to run it, we help support you whether you choose to have us manage on your behalf and then also make sure we're providing an entire portfolio of services, both within Red Hat as well as from third parties so that you have the most productive, integrated experience possible. >> Okay, and Stefanie, loved your point of view on this, and Joe, I'd love to understand how you're bridging kind of the Ansible and Kubernetes communities, but Stefanie, why don't you chime in first? >> Yeah, I'll quickly add to what Ashesh said and talked about well on really the promise and the value of containers, but particularly from a RHEL perspective, we have taken all our capabilities and knowledge in the Linux space and we have taken that to apply it to OpenShift, right, because Kubernetes and containers is just another way to deploy Linux, so making sure that that underpinning is stable, secure and resilient and tied to an ecosystem, right? An ecosystem of various architectures, an ecosystem of ISVs and tooling, right? We've pulled that together and everything we've done in Linux for, you know, over decades now at Red Hat and we've put that into that customer experience around OpenShift to deploy containers, so we've really built, it has been a portfolio-wide effort, as Ashesh alluded to, and of course, it passes over to Ansible as well with Joe's portfolio. >> Yeah, we talked about this upfront, Joe. The communities are so crucial, so how are you bridging those Ansible and Kubernetes communities? What's your thought on that? >> Well, a quick note about those communities. So you know, OpenShift is built on Kubernetes and a number of other projects. Kubernetes is number seven in the top 10 open source projects based on the number of contributors. Turns out Ansible is number nine, right? So if you think about it, these are two incredibly robust communities, right? On the one hand, building the container platform in Kubernetes and in the other around Ansible and automation. 
It turns out that as the need for this digital acceleration and building these container-based applications comes along, there's a lot of other things that have to be done when you deploy container-based applications, whether it's infrastructure automation, right, to expand and manage and automate the infrastructure that you're running your container-based applications on, creating more clusters, you know, configuring storage, network, you know, counts, things like that, but also connecting to other systems in the environment that need to be integrated with around, you know, ITSM or systems of record, change management, inventory, cost, things like that, so what we've done is we've integrated Ansible, right, in a very powerful way with OpenShift through our advanced cluster management capability, which allows us to provide an easy way to instrument Ansible during critical points, whether it's you're deploying new clusters out there or you're deploying a new version of an application or a new application for the first time, whether you're checking policy, right, to ensure that, you know, the thing is secure and that, you know, you can govern these environments, right, that you're relying on. So we've really now tied together two sort of de facto standards, OpenShift built on Kubernetes and a number of other projects and then Ansible, or Red Hat, has taken this innovation in the community and created these certified content collections, platforms and capabilities that people can actually build and rely on and know that it's going to work. >> Ashesh, I mean, Red Hat has earned the right, really, to play in both the cloud native world and of course the traditional infrastructure world, but I'm interested in what you're seeing there, how you're bringing those two worlds together. Are they still, you know, largely separate? Are you seeing traditional IT? I mean, you're certainly seeing them lean in to more and more cloud native, but what are you guys doing specifically to kind of bring those worlds together? >> Yeah, increasingly it's really hard to be able to separate out those worlds, right? So in the past, we used to call it shadow IT. There really is no shadow IT anymore, right? This is IT. So we've embraced that completely. You know, our take on that is to say there are certain applications that are going to be appropriate for being run in a data center a certain way. There are certain other workloads that'll find their way appropriate for the public cloud. We want to make sure we're meeting them across, but what we want to do is constantly introduce technologies to help support the choices customers make. What do I mean by that? Let me give a couple examples. One is, you know, we can say customers have VMs that are based out in specific environments and they can only run as VMs. That code can't be containerized for a variety of reasons, right? You know, hard to re-architect that, don't have the funds, you know, have certain security compliance reasons. Well, what if we could take those VMs and then have them be run in containers in a native fashion? Wouldn't that be extremely powerful value proposition to run containers and then VMs as containers sort of side by side with Kubernetes orchestrating them all. So that's a capability we call open source virtualization. We've introduced that and made that generally available within our platform. Another one, which I think Joe starting to touch on a little bit here, is both around this notion of Ansible, as well as advanced cluster management. 
And say, once technologies like Ansible are familiar to our customers, how about if we find ways to introduce things like the operator framework to help support people's use of Ansible and introduce technologies like advanced cluster management, which allows for us to say, well, regardless of where you run your clusters, whether you run your Kubernetes clusters on premise, you run them in the cloud, right, we can imagine a consistent fashion and manage, you know, health and policy and compliance of applications across that entire state. So David, question's extremely good one, right, but what we are trying to do is try to be able to say, you know, we are going to just span those two worlds and provide as many tools as possible to ensure that customers feel like, you know, the shift, if you will, or the move between traditional enterprise software application development and the more modern cloud native can be bridged as seamlessly as possible. >> Yeah, Joe, we heard a lot of this at AnsibleFest, so the ACM as a key component of your innovation, and frankly, your competitive posture. Anything you would add to what Ashesh just shared? >> Well, I think that one of the things that Red Hat is really good at is we take management and automation as sort of an intrinsic part of what needs to go on. It's not an afterthought. You just don't go build something, go, "Oh I need management," go out and, you know, go get something, right, so we've been working on, sort of automation and management for many, many years, right, so we build it in concert with these platforms, right, and we understand the physics of these different environments, so we're very focused on that from inception, as opposed to an afterthought when people sort of paint themselves into a corner or have management challenges they can't deal with. >> There's a lot of analogs in our business, isn't there? Management is a bolt-on and security is a bolt-on. It just doesn't work that well and certainly doesn't scale. Stefanie, I want to come back to you and I want to come back to the edge. We hear a lot of people talking about extending their deployments to the edge in the future. I mean, you look at what IBM's doing. They're essentially betting its business on RHEL and OpenShift and betting that its customers are going to do the same as well are you. Maybe talk about, you know, what you're doing to specifically extend RHEL to the edge. >> Yeah, Dave, so we've been looking at this space consistent with our strategy, as Ashesh talked about, right? Our goal is to make sure that it all looks and feels the same and provides one single Linux experience. We've been building on a number of those aspects for quite some time, things like being able to deal with heterogeneous architectures, as an example, being able to deal with, you know, having Arm components and x86 components and power components and being able to leverage all of that from multiple vendors and being able to deploy. Those are things we've been focused on for a long time and now when you move into the space of the edge, certainly we're seeing, you know, essentially data center level hardware move out to be dis-aggregated and dispersed as they move it closer to the data and where that's coming in and where the analysis needs to be done, but some of those foundational things that we've been working on for years starts to pay off because the edge tends to be more heterogeneous all the way from an architecture level to an application level, so now we're seeing some asks. 
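As a rough illustration of the integration Joe describes, where Ansible is instrumented at critical lifecycle points such as a new cluster or a fresh application rollout, here is a hedged Python sketch that invokes a playbook through the ansible-runner library. The event shape, playbook name, and working directory are assumptions; in an actual Advanced Cluster Management setup this hook is normally wired up declaratively rather than hand-coded.

```python
# Hedged sketch: run an Ansible playbook when a lifecycle event arrives.
# The event fields, playbook name, and private_data_dir are illustrative assumptions.
import ansible_runner

def on_lifecycle_event(event):
    """event is an assumed shape, e.g. {'type': 'cluster_created', 'cluster': 'edge-01'}."""
    if event["type"] not in {"cluster_created", "app_deployed"}:
        return None
    result = ansible_runner.run(
        private_data_dir="/tmp/automation",           # hypothetical runner workspace
        playbook="post_provision.yml",                # hypothetical playbook: ITSM ticket, DNS, policy checks
        extravars={"target_cluster": event["cluster"]},
    )
    # result.status is 'successful' or 'failed'; result.rc is the process return code.
    return result.status

# Usage sketch:
# status = on_lifecycle_event({"type": "cluster_created", "cluster": "edge-01"})
```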
We've been working upstream in order to pull in some features that drive capabilities around specifically updating, deploying those updates, doing rollbacks and things like that, so we're focused on that. But really, it's about pulling together the capabilities of having multiple architectures, dealing with heterogeneous infrastructure out there at the edge, being able to reliably deploy it even when, for example, we have customers who they deploy their hardware and they can't touch it for years. How do they make sure that that's out there in a stable environment that they can count on? And then, you know, adding in things like containerization. We talked about the magic of that, being able to deploy an application consistently and being able to deploy a single container out there to the edge. We're thinking about it all the way from the architecture up to how the application gets deployed and it's going to take the whole portfolio to do that as you need to manage it, as you need to deploy containers, so it's a focus across the company for how we deal with that. >> And as we were talking about before, you know, it takes a village. You know that bromide, but it does, requires an ecosystem of jobs. I mean, there's some real technical challenges in R&D that has to happen. I mean, you've got to be, you know, you're talking about cloud native in all three different clouds, and you know, and not just the big three, but other clouds and then bringing that to the edge, so there's some clear technical challenges, but there's also some business challenges out there. So you know, what are you seeing in that regard? You know, what are some of those things that you hope to solve by bridging that gap? >> Well, I think one of the things we're trying to do and I'm focused on the management and automation side is to provide a common set of management tooling of automation, right, and I think Ansible fits that quite well. So for the past five years since Ansible's been part of Red Hat, we've expanded from, you know, they started off initially doing configuration management, right? We've expanded to include, you know, network and storage and security, now edge. At AnsibleFest, we demonstrated things like serverless event-driven automation, right, building an OpenShift serverless in Knative. We're trying to expand the use cases for Ansible so that there's a simplicity, there's a tool reduction, right, across all these environments and you don't have to go deal with nine vendors, and you know, 17 different tools to try to manage each element here to be able to provide a common set. It reduces complexity, cost and allows skills to be able to be reused across these different areas. It's going to all be about digital acceleration, right, and reducing that complexity. And one last comment. One of the reasons we bought Ansible years ago is the architecture, it's agent-less. Many of our competitors that you hear, the first thing they want to do is go deploy an agent somewhere and that creates its own ongoing burden of, do I have the latest version of the agent? Is it secure? Does it fit on the device? As Stefanie mentioned, is there a version that fits on the architecture the device is running on? It starts getting really, really complicated. So Ansible is just simple, elegant, agent-less. We've expanded the domains we can automate with it and we've expanded sort of the modality. How can I call it? User, driven by an event, as part of some life cycle management, app deployment, Ansible plugs right in. 
>> Well, Joe, you can tell you're a management guy, right? Agents, another thing that has to be managed. You just laundry list of stuff. (laughs) I want to come back to this notion Joe just touched on, this digital transformation. They say, "If it ain't broke, don't fix it." Well, COVID broke everything. And I got to say, I mean, all the talk about digital transformation over the last, you know, several years, yes, it was certainly happening, but there was also a lot of lip service going on and now if you're not digital, you're out of business. And so, you know, given everything that we've seen in the last, you know, whatever, 150, 200 days or so, what's the impact that you're seeing on customers' digital transformation initiatives, and you know, what is Red Hat doing to respond? Maybe Ashesh, you could start and we can get feedback from the others. >> Yeah, David, it's an unfortunate thing to say, right, but there's that meme going around with regard to who's responsible for digital transformation and it's a little bit of I guess gallows humor to call it COVID, but we're increasingly seeing that customers and the journey that they're on is one that they haven't really gotten off, even with this, if you will, change of environment that's come about. So projects that we've seen in play, you know, are still underway. We've seen acceleration, actually, in some places with regard to making services more easily accessible. Anyone who's invested in hybrid cloud or public cloud is seeing huge value with regard to being able to consume services remotely, being able to do this on demand and that's a big part of the value proposition, you know, that comes forward. And increasingly what we're trying to do is try to say, how can we engage and assist you in these times, right? So our services team, for example, has transformed to be able to help customers remotely. Our support team has gone off and work more and more with customers. For a company like Red Hat, that hasn't been completely, if you will, difficult thing to do mostly because we've been so used to working in a distributed fashion, working remotely with our customers, so that's not a challenge in itself, but making sure customers understand that this is really a critical journey for them to go on and how we can kind of help them, you know, walk through that has been good and we're finding that that message really resonates. Right, so both Stefanie and Joe talked a little bit about, you know, how essentially our entire portfolio is now built around, you know, ensuring that if you'd like to consume on demand, we can help support you, if you'd like to consume in a traditional fashion, we can help you. That amount of flexibility that we provide to customers is really coming to bear at this point in time. >> So maybe we could wrap with, we haven't really dropped any customer names. Stefanie and Joe and Ashesh, I wonder if you have any stories you can share or, you know, customer examples that we could close on that are exciting to you this year. >> So I can start, if that's okay. >> Please. >> So an area that I find super interesting from a customer perspective that we're increasingly seeing more and more customers go down is sheer interest in, if you will, kind of diversity of use cases that we're seeing, right? So we see this, for example, in automotive, right? 
So whether it's a BMW or a Volkswagen, we see this now in health care with the ACA, and in, we'll say, a little bit more traditional industries like energy with Exxon or Schlumberger, around an increasing embrace of AI/ML, right? So artificial intelligence and machine learning, if you will, advanced analytics, being much more proactive with regard to how they can take data that's coming in, adjust it, be able to make sense of the patterns and then be able to, you know, have some action that has real business impact. So this whole trend towards, you know, AI/ML workloads that they can run is extremely powerful. We work very closely with Nvidia, as well, and we're seeing a lot of interest, for example, in being able to run a Kubernetes-based platform supporting Nvidia GPUs for specific classes of workloads. There's a whole bunch of customers, people in financial services, for whom, you know, this is a rich area of interest. You know, we've seen great use cases, for example, around grid with Deutsche Bank. And so, to me, I'm personally really excited to see that embrace, if you will, from our customers with regard to saying there's a whole lot of data that's out there. You know, how can we essentially use all of these tools that we have in place? You know, we talk about containers, microservices, DevOps, you know, all of this, and then put it to bear to really put it to work and get business value. >> Great, thank you for that, Ashesh. Stefanie, Joe, Stefanie, anything you want to add or final thoughts? >> Yeah, just one thing to add, and I think Ashesh talked to a whole number across industry verticals and customers. But I think the one thing that I've seen through COVID is that if nothing else, it's taught us that change is the only constant, and I think, you know, our whole vision of open hybrid cloud is how to enable customers to be flexible and do what they need to do when they need to do it, wherever they want to deploy, however they want to build. We provide them some consistency, right, across that as they make those changes, and I think as I've worked with customers here since the beginning of COVID, it's been amazing to me the diversity of how they've had to respond. Some have doubled down in the data center, some have doubled down on going public cloud, and to me, this is the proof of the strategy that we're on, right, that open hybrid cloud is about delivering flexibility, and boy, nothing's taught us the need for flexibility like COVID has recently, so I think there's a lot more to do. I think pulling together the platforms and the automation is what is going to enable the ability to do that in a simple fashion. >> So Joe, you get the final word. I mean, AnsibleFest 2020, I mean, it's weird, right? But that's the way these events are, all virtual. Hopefully, next year we've got a shot at being face to face, but bring us home, please. >> Yeah, I've got to tell you, having, you know, 20,000 or so of your closest friends get together to talk about automation for a couple of days is just amazing. That just shows you sort of the power of it. You know, we have a lot of customers this week at AnsibleFest telling you their story, you know, CarMax and ExxonMobil, you know, BlueCross BlueShield. I mean, there's a number across all different verticals, globally, Cepsa from Europe. I mean, just an incredibly, you know, diverse array of customers and use cases.
I would encourage people to look at some of the customer presentations that were on at AnsibleFest, listen to the customer telling you what they're doing with Ansible, deploying their networks, deploying their apps, managing their infrastructure, container apps, traditional apps, connecting it, moving faster. They have amazing stories. I encourage people to go look. >> Well, guys, thanks so much for helping us wrap up AnsibleFest 2020. It was really a great discussion. You guys have always been awesome CUBE guests. Really appreciate the partnership and so thank you. >> Thanks a lot, Dave. Appreciate it. >> Yeah, thanks, Dave. >> Thanks for having us. >> All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE and we'll see you next time. (calm music)
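One concrete footnote to Ashesh's earlier point about AI/ML on a Kubernetes-based platform with Nvidia GPUs: the sketch below shows what requesting a GPU for a training workload can look like with the official Kubernetes Python client. It assumes a cluster where the Nvidia device plugin exposes the nvidia.com/gpu resource; the image, namespace, and pod name are placeholders, not details from any of the customers mentioned above.

```python
# Minimal sketch: requesting an Nvidia GPU for an AI/ML workload on a
# Kubernetes (or OpenShift) cluster via the official Python client.
# Assumes the cluster runs the Nvidia device plugin so that the
# "nvidia.com/gpu" extended resource exists; image and namespace are
# placeholders for illustration only.
from kubernetes import client, config

def launch_training_pod(namespace: str = "ml-experiments") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    container = client.V1Container(
        name="trainer",
        image="example.registry.local/train:latest",  # placeholder image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1", "cpu": "4", "memory": "16Gi"},
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-training-job", labels={"app": "ml"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

if __name__ == "__main__":
    launch_training_pod()
```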
SUMMARY :
brought to you by Red Hat. Ashesh, good to see you again. Thanks for having me on again, Dave. Stefanie, glad to see you Yeah, good to see you, Dave. of the Management Ashesh, I'm going to start with you. So you know, as we look forward, That's going to be your business unit, so to me, it's really about where do you that you need to automate, You saw, you know, VMware bought Salt, and thanks to all of you out there Is that how you think about it And so this fabric, if you will, and Ashesh, you can maybe add some color. Yeah, so you know, And Ashesh, you guys have obviously you know, the world that they live in and so it's able to sort and been one of the earliest to do that and knowledge in the Linux space so how are you bridging those Ansible right, to ensure that, you know, and of course the traditional and manage, you know, health and policy so the ACM as a key go out and, you know, go get something, I mean, you look at what IBM's doing. being able to deal with, you and you know, and not just the big three, We've expanded to include, you know, in the last, you know, whatever, you know, that comes forward. that are exciting to you this year. and then be able to, you Stefanie, anything you want and I think, you know, our whole So Joe, you get the final word. listen to the customer telling you Really appreciate the Thanks a lot, Dave. and we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Stefanie | PERSON | 0.99+ |
Dave Valenti | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Frank Luman | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Joe | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Exxon | ORGANIZATION | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Werner | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Symantec | ORGANIZATION | 0.99+ |
Joe Fitzgerald | PERSON | 0.99+ |
Ashesh Badani | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Sanjay Poonen | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
Jessie | PERSON | 0.99+ |
ExxonMobil | ORGANIZATION | 0.99+ |
Jon Sakoda | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Stefanie Chiras | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ashesh | PERSON | 0.99+ |
Jesse | PERSON | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
LA | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Johnson | PERSON | 0.99+ |
Dave allante | PERSON | 0.99+ |
Miami | LOCATION | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
Krishna Doddapaneni, VP, Software Engineering, Pensando | Future Proof Your Enterprise 2020
>>From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >>Hi, welcome back, I'm Stu Miniman, and this is a CUBE conversation digging in with Pensando, talking about what they're doing to help people really bring some of the networking ideals to cloud native environments, both in the cloud and in the data center. Joining me on the program is Krishna Doddapaneni. He is the vice president of software. Krishna, thanks so much for joining us. >>Thank you so much for talking to me. >>Alright, so, Krishna, the Pensando team is, you know, very well known in the industry for, uh, innovation, especially in the networking world. Give us a little bit about your background specifically, how long you've been part of this team, and, you know, a bit about you and the team. >>Yup. Um, so I'm VP of software at Pensando. Before Pensando, before founding Pensando, I worked in a few startups, Insieme Networks, Nuova Systems and Greenfield Networks; all three of those startups were acquired by Cisco. My most recent role before this company was VP of engineering at Cisco, where I was responsible for a product called ACI, which is Cisco's flagship SDN product. So, I mean, why did we found Pensando? When we were looking at the industry over the last few years, right, there were a few trends that were becoming clear. Obviously we have a lot of enterprise background, and we were watching, you know, ACI being deployed in enterprise data centers. One sore point for customers from an operational point of view was installing service devices, network appliances or storage appliances. It's not only the operational complexity that these devices bring; they also don't give you the performance and bandwidth, uh, and PPS that you expect for traffic, especially east-west. So that was one major issue. And also, if you look at where the intelligence is going, the trend has been that it's going to the edge. The reason for that is the routers or switches or the devices in the middle cannot handle the scale. I mean, the bandwidths are growing, the scale is growing, stateful services are going into the network, and the switches and the appliances are not able to handle it. So you need something at the edge, close to the application, that can handle, uh, this kind of services and bandwidth. And the third thing is obviously, you know, x86. Even a few years back, you know, every two years you were getting more transistors. I mean, obviously Moore's Law has slowed, and we know how that part is going. So the x86 cycles are more valuable, and we don't want to use them for these network services, including SDN or firewalls or load balancers or NVMe virtualization. So looking at all these trends in the industry, you know, we thought there was a good opportunity to do a domain-specific processor for I/O and build products around it. I mean, that's how we started Pensando. >>Yeah. So, Krishna, it's always fascinating to watch: if you look at startups, they are often a product of the time that they're in and the technologies that are available. You know, sometimes there are ideas that take a few tries and, you know, a maturation of the technology, and other times I'll hear teams say, oh, well, when we did this, there was this new innovation that came out that I wish I had had when I did this last time. We've been talking about, you know, distributed architectures for well over a decade now, and in many ways I feel edge computing is just, you know, the latest discussion of this. But when it comes to the software, which is under your purview, what are some of the things that are available for you that might not have been, you know, in your toolkit five years ago? >>Yeah. So the growth of open source software has been very helpful for us, because we built a scale-out, microservices-based controller. The last time, when we were building that, you know, we had to build our own consensus algorithm, we had to build our own distributed databases for metrics and events and logs. Right now, because of open source, we leverage etcd, Elastic, InfluxDB and all these open source technologies that you hear about, since we want to leverage the Kubernetes ecosystem. That helped us a lot. At the same time, if you think about it, even the software which is not open source, the closed source stuff, is maturing. I mean, if you talk about SDN, you know, seven or eight years back there were N versions of doing SDN, but now the industry standard is EVPN, which is one of the core pieces of what we do; we do an SDN solution with EVPN. So, you know, it's more that the industry is coming to a place where, you know, these are the standards, and this is open source software that you can leverage and quickly innovate with, compared to building all of this from scratch, which would be a big effort for a startup to succeed and build in time for your customers' success. >>Yeah. And Krishna, you know, you talk about open source, but it's not only in the software; there are the hardware standards too. Think about things like Open Compute or the proliferation of, you know, GPUs and everything along those lines. How has that had an impact? >>So, I mean, it's a good thing you're talking about. For example, we are looking in the future at an OCP card, and it's a good thing that an OCP card goes into an HPE server, it goes into a Dell server. So pretty much, you know, our goal is to enable this platform that we built for, you know, all the use cases that a customer could think of. Right. So in that way, hardware standardization is a good thing for the industry. Um, and then the same thing if you go into how we program the ASIC: you know, we adopted the standard of P4 programming, which is an industry consortium led by a few people. Um, we want to make sure that, you know, we follow the standards, so for the customer who's coming in who wants to program it, it's good to have a standards-based thing rather than doing something completely proprietary, while at the same time you're enabling innovations. And then those innovations we try to push back to the open source. That's what we're trying to do with P4. >>Yeah, excellent. I've had some real good conversations about P4. Um, and the way Pensando is leveraging that may be a little bit different. You know, you talk about standards and open source; oftentimes it's like, well, is there a differentiator there? There are certain parts of the ecosystem that, you'd say, have kind of been commoditized. Obviously you're taking a lot of different technologies and putting them together, so help share the uniqueness of Pensando. What differentiates what you're doing from what was available in the market, or what I couldn't just cobble together from, you know, a bunch of open source hardware and software? >>Yeah. I mean, if you look at the technology, take the networking that both of us are very familiar with. If you want to build an SDN solution, you can take open source software, or you can use x86 and, you know, some merchant silicon, and cobble it together. But the problem is you will not get the performance and bandwidth that you're looking for. Okay. So let's say, you know, you want a high-PPS solution, or you want a high-CPS solution because the number of connections is growing for your IoT use case or 5G use case, right. To get that with an open source thing, without any assist from a domain-specific processor, your performance will be low. And in the enterprise and cloud use cases, as you know, you're trying to pack as many VMs and containers into one server as you can, because, you know, you get charged on that; I mean, the cloud customers make money based on that, right? So you want to offload all of those things into a domain-specific processor like the one we've built, which we call the DSC, which will, you know, do all the services at pretty much no cost to x86. I mean, on x86 you'll be using zero cycles for doing, you know, features like security groups or VPCs or VPN, uh, or encryption or storage virtualization. Right. That's where the value comes in. I mean, if you compare the TCO model of using a bunch of x86 cores, or a bunch of Arm or AMD cores, to what we do, the TCO model works out great for our customers. I mean, that's why, you know, there's so much interest in the product. >>Excellent. Glad you brought up customers, Krishna. One of the challenges I have seen over the years with networking is it tends to be, you know, a completely separate language that we speak, you know, a lot of acronyms and protocols that are not necessarily parsable to people outside of the silo of networking. I think back, you know, to SDN; people on the outside would be like, that stands for Still Does Nothing, right? It's networking mumbo jumbo for people outside of networking. And what I think about is, you know, if I was going to the C-suite of an enterprise customer, they don't necessarily care about those networking protocols. They care about, you know, the business results and the productivity. How do you help explain what Pensando does to those that aren't, you know, steeped in the network? >>So what is the customer looking for? You're right, they don't need to know what encap you use. What the customer is looking for is operational simplicity, and they're looking for security. And if you look at it, sometimes, you know, those two can be orthogonal: if you make it very highly secure but you make it a long operational procedure before you deploy a workload, that doesn't work for the customer, because the operational complexity increases tremendously. Right? So where we are coming in, um, is that we want to simplify this for the customer. You know, there is a very simple way to deploy policies, a simple way to deploy your networking infrastructure, and the way we do it is we don't care what your physical network is, uh, in some sense, right? Because we are close to the server, that's a very good advantage we have: we apply the policies before the packet even leaves the server, right? So in that way, the customer knows it's a fully secure environment. And you don't want to manage each one individually; we have the PSM, which manages, you know, all of these services from a central place. And it's easy to operationalize the fabric, whether you talk about upgrades or you talk about, you know, deploying new services; it's all driven with REST APIs, and you can have a GUI, so you can do it from a single place. And that's where, you know, the customer's value is, rather than talking about, as you said, encaps or, you know, exactly the route to a port. That is not the main thing they wake up thinking about every day. They wake up thinking, do I have a security risk, and then, how easy is it for me to deploy new services or bring up a new data center? >>Right. Okay. Krishna, you're also spanning a few different worlds with your product. You know, traditionally, think about an enterprise data center versus a hyperscale public cloud, and edge sites come to mind: very different skill sets for management, you know, different types of deployments there. Mmm. You know, I understand you're going to, you know, play in all of those environments. So talk a little bit about that, please: how you do that and, you know, where you sit in that overall discussion. >>Yes. So, I mean, the number one rule inside the company is that we are driven by customers, and obviously our customers' success is our success. Given that, right, what we try to do is build a platform that is, kind of, you know, programmable, obviously starting from the P4 that we talked about earlier, but also, from a software point of view, it's kind of pluggable, right? So when we build the software, for example, for cloud customers, and they use the DSC, they use the same set of REST APIs or gRPC APIs that the DSC provides to their controller. But when we ship the same platform to enterprise customers, we build our own controller, and we use the same DSC APIs. So the way we are trying to do things is to fully leverage what we do for enterprise customers and cloud customers. Mmm. We don't try to reinvent the wheel. Uh, obviously at the same time, if you look at the highest-level constructs, from a network perspective or a user's perspective, what are you trying to do? You're trying to provide connectivity, you're trying to provide isolation and you're trying to provide security. Uh, so all these constructs we encapsulate in APIs, which are, you know, mostly cloud-like APIs, and those APIs are used by both cloud customers and enterprise customers, and the software is built in a way where any layer can be removed and any layer can be added, right? Because we are not interested in having multiple different offers for different customers; then we would not scale. So the idea when we started the software architecture was, how do we make it pluggable and how do we make it programmable, so that if a customer says, I don't want this piece of it, they can put a third-party piece on it and still integrate at a common layer using the APIs. >>Yeah. Well, you know, Krishna, I have a little bit of appreciation for some of the hard work your team has been doing: you know, a couple of years in stealth, but, you know, really accelerating from the announcement coming out of stealth at the end of 2019 to, just about half a year later, your GA with a major OEM in HPE. Definitely a lot of work that needed to be done. It brings us to, you know, what are you most proud of from the work that your team's doing? Uh, you know, we don't need to hear any, you know, major horror stories, but there always are some of those, you know, rat holes or challenges that, uh, you know, often get hidden behind the curtain. >>Okay. I mean, personally, I'm most proud of the team that we've built. Um, so, uh, you know, obviously our executives have a good track record of disrupting the market multiple times, but I'm most proud of the team because the team is not just worried about the technology. They are senior technologists and they're great leaders, but they're also worried about the customer problem, right? So it's always about, you know, getting the right mix; obviously it's execution combined with technology that makes you succeed, and that is what I'm most proud of. You know, we have teams running all these projects independently, um, and we are releasing almost every week, if you look at all our customers, right. And then, you know, being a small company, doing that is, hmm, pretty challenging in a way. But we came up with methodologies where we fully believe in automation; everything is automated, and whenever we release software, we run through the full set of automation. So then we are confident that the customer is getting good quality code. Uh, it's not like, you know, we cooked up something and now they need to upgrade to that software. I think that's the key part: if you want to succeed in this day and age, you have to develop the features at the velocity that you want to develop, and still support all these customers at the same time. >>Okay. Well, congratulations on that, Krishna. All right, final question I have for you: give us a little bit of guidance going forward. You know, often when we see a company, we try to say, oh, well, this is what the company does. You've got a very flexible architecture and a lot of different types of solutions; what kind of markets or services might we be looking at from Pensando, uh, you know, down the road a little ways? >>So I think we have a long journey. We have a platform right now, and we are already shipping; the platform is already shipping with a storage provider, uh, we are integrating with the premier public clouds, and, you know, in the enterprise market we have already deployed a distributed firewall with some of our customers. So, you know, if you take this platform, it can be extended to add all the services that you see in data centers and clouds, right. But primarily we are driven from a customer perspective and a customer-priority point of view. Mmm. So as we go, we'll try to add more edge services, we'll try to add more storage features. Mmm. And then we also see initial interest in the service provider market, in what we can do for 5G and IoT, uh, because we have the flexible platform. We see, you know, how to apply this platform to these new applications, and that's where it probably will go in the future. >>All right. Well, Krishna Doddapaneni, vice president of software with Pensando, thank you so much for joining us. >>Thank you, sir. It was great talking to you. >>All right. Be sure to check out theCUBE.net, where you can find lots of interviews from Pensando. I'm Stu Miniman, and thank you for watching theCUBE.
SUMMARY :
uh, you know, very well known in the industry three, uh, you innovation. you know, ECA being deployed in the enterprise data centers. you know, every two years, you know, you're getting more transistors. and, you know, maturation of the technology and other times, you know, I'll hear teams and they're like, This controller, like the last time I don't, when we were building that, you know, we had to build our own consensus Um, so, you know, it's more of, you know, the industry's coming to a place where, this platform, uh, that what we built in, you know, all the use cases that customer could Um, we want to make sure that, you know, we follow the standards for the customer who's coming in, I mean, that's once an enterprise in the cloud use case state, as you know, you're trying to pack as many BMCs I mean, that's why, you know, there's so much interest in a product. to be, you know, a completely separate language that we speak there, you know, you know, and if you look at it sometimes, you know, both like in orthogonal, And that's where, you know, a customer's value is rather than talking about, as you're talking about end caps you know, programmable obviously starting from, you know, before that we talked about earlier, Uh, obviously at the same time, if you look at the highest but, you know, really accelerating from, uh, you know, the announcement coming out of stealth, Um, so, uh, you know, obviously, you know, uh, our executors have it good track And then, you know, being a small company doing that is a firm, uh, you know, download down the road a little ways. So, you know, uh, so if you take this platform, it can be extendable to add
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cisco | ORGANIZATION | 0.99+ |
Christina | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Ben Sandoz | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Ben | PERSON | 0.99+ |
Ben Tondo | PERSON | 0.99+ |
Krishna Doddapaneni | PERSON | 0.99+ |
Sando | PERSON | 0.99+ |
Krishna | PERSON | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
cube.net | OTHER | 0.99+ |
both | QUANTITY | 0.99+ |
one major issue | QUANTITY | 0.98+ |
six | QUANTITY | 0.98+ |
Stu middleman | PERSON | 0.98+ |
five years ago | DATE | 0.98+ |
2020 | DATE | 0.98+ |
one set | QUANTITY | 0.98+ |
third thing | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
Penn Sundo | ORGANIZATION | 0.97+ |
HPE | ORGANIZATION | 0.97+ |
AMD | ORGANIZATION | 0.96+ |
One sore point | QUANTITY | 0.96+ |
DVA | ORGANIZATION | 0.94+ |
ECA | ORGANIZATION | 0.94+ |
Cletus | PERSON | 0.94+ |
each one | QUANTITY | 0.93+ |
single place | QUANTITY | 0.93+ |
2019 | DATE | 0.92+ |
One | QUANTITY | 0.91+ |
Sandow | LOCATION | 0.9+ |
zero cycles | QUANTITY | 0.9+ |
end | DATE | 0.9+ |
Rockwell PSM | ORGANIZATION | 0.88+ |
Penn Sarno | ORGANIZATION | 0.88+ |
Sandow | PERSON | 0.86+ |
Fiji | ORGANIZATION | 0.86+ |
seven | QUANTITY | 0.85+ |
Pensando | ORGANIZATION | 0.84+ |
ACA | ORGANIZATION | 0.83+ |
Kubernetes | ORGANIZATION | 0.82+ |
IOT | ORGANIZATION | 0.82+ |
Tondo | ORGANIZATION | 0.79+ |
APS | ORGANIZATION | 0.79+ |
word | QUANTITY | 0.77+ |
Christian | ORGANIZATION | 0.77+ |
about half a year | QUANTITY | 0.77+ |
a few years back | DATE | 0.76+ |
SDN | ORGANIZATION | 0.76+ |
Liberty | ORGANIZATION | 0.75+ |
x86 | OTHER | 0.74+ |
over a decade | QUANTITY | 0.72+ |
two years | QUANTITY | 0.68+ |
East West | LOCATION | 0.67+ |
NBME | ORGANIZATION | 0.64+ |
APS | TITLE | 0.54+ |
Future Proof Your Enterprise | TITLE | 0.52+ |
BSC | TITLE | 0.52+ |
ike cloud | TITLE | 0.51+ |
six | OTHER | 0.39+ |
Sizzle Reel | Cisco Live US 2019
yeah I probably would use a sort of ever-changing I would say ever-expanding you know but you have to write because what we saw when we started off is roll around how to automate my datacenter how do I get a cloud experience in my data center what we see changing and okay Frank is driven by this whole app refactoring process that customers want to deploy apps maybe in the cloud maybe develop in the cloud and so they need an extension to the automated data center into the cloud and so really what you see from us is an expansion of that ACA concept you rangas point we actually really didn't change we just we're just extending it to container development platforms two different cloud environments what's the same area automate end-to-end network reach as well as the segmentation what is the right there right sorry security regime in this you know cloud era how is it evolving well I mean what we're doing is we're bringing tools like tetration which now runs on Prem and in the cloud things like stealthWatch which runs on from in the cloud and simply bringing them security frameworks that are very effective we're I think a very capable of well known security vendor but bringing them the capability to run the same capabilities in their on-prem environments and their data centers as well as in multiple public clouds and that just eliminates the seams that hackers could maybe get into it makes common policy Possible's they can define policy around an application once and have that apply across the vault environments which not only it's easier for them but it eliminates potential mistakes that they might make that might leave things open to a hacker so for us it's that simple bringing very effective common frameworks for security across all these cisco has embraced the idea of being a platform and not a siloed individual product line and so for a service provider like CenturyLink for us to be able to embrace that same philosophy of the platform of services what that means is that our engineering and field ops folks our Operations teams do all the hard work on the back end to make sure that we have established all of the right security the right network the reliability the global scalability of our specific platform of services and being that leader in telecommunications and then we're able to lay that cisco platform on top of it and what happens then from a product management level is once you've established that foundation it's really plug-and-play the customer calls and says I need calling I need meetings I need you know whatever it is they need and we build that solution and very quickly can put those components into play and get them to use the service right away so what we've done across the portfolio even in primary storage is made sure that we've done all sorts of things that help you against a ransomware a malware attack keep the data encrypted I think the key point and actually I think Silicon angle wrote about this is like some like 98% of all enterprises getting a broke it in two anyway so it's great that you've got security software on the edge with at the IBM or RSA or blue coat or checkpoint oh who cares who you buy the software from but when they're in there stealing and sometimes you know some accounts have told us they can track them down in a day but if you're a giant global fortune 500 datacenter look it may take you like a week so they can be stealing stuff right and left so we've done everything from we have right once technology right so it's immutable data you can't change it 
we've got encryption so if they steal it guess what they can't use it but the other thing we've done is real protection against ransomware now that's a great question in terms of modernization of infrastructure and there's some really interesting trends that I think are occurring and I think the one that's getting a lot of us is really edge computing and what we're finding is depending on the use case it can be an enterprise application where you're trying to get localization of your data it could be an IOT application where it's it's really critical for latency or bandwidth to keep compute and data close to the thing if you will or it could be mobile edge computing where you want to do thing like analytics and AI on a video stream before you tax the the bandwidth of the cellular infrastructure with that data stream so across the board I think edge is super exciting and you can't talk about edge with like I said talking about artificial intelligence another big trend whether it's running native running with an accelerator an FPGA I think we're seeing a myriad of use cases in that space but Security's in the end to your point right I've got software to find access I've got mobile access points I've got you know tetration I've got you know all of these products that are helping people that in the past they were just patching holes in the dike you know hey this happened let's put this software product here this happened let's put this in and we actually built the security practice like the last three or four years ago it's growing you know the number of people that are whether it's regulation compliance you know I got some real problem I think I've got a problem and I don't know what it is our ability to come back and sit down and say let's evaluate what your situation is so I was talking to the networking guys and so Wow enterprise networking it's up way up what's driving that the need to transform or is that you know what is it they're like a lot of times it's something are long security that's making them step back and reevaluate and then sometimes that transfer translates into an entire network refresh there are tools that people use and everybody's environments a little different so some might want to integrate in and use ansible terraform you know tools like that and so then you need code that will help integrate into that other people are using ServiceNow for tickets so if something happens integrate into that people are using different types of devices hopefully mostly Cisco but they may be other using others as well we can extend code that goes into that so it really helps to go in different areas and what's kind of cool is that our there's an amount of code that where people have the same problems you know and you know you start doing something everyone has to make the first few kind of same things in software let's get that into exchange and so let's share that there's places where partners are gonna want to differentiate keep that to yourselves like use that as your differentiated offer and then there's areas where people want to solve in communities of interest so we have we have someone who does networking and he wants to do automation he does it for power management in the utilities industry so he wants a community that will help write code that'll help for that area you know so people have different interests and you know we're hoping to help facilitate that because Cisco actually has a great community we have a great community that we've been building over the last 30 years 
there the network experts they're solving the real problems around the world they work for partners they work for customers and we're hoping that this will be a tool to get them to band together and contribute in a in a software kinda way they have the right reason to be afraid because so many automation was created a once user exactly was right and then you have the cost of traditional automation you have the complexity to create a network automation you guys realize that middle coordination you cannot have little automation only work on a portion of your needle you have to work on majority if not all of your needle right so that's became very complex just like a you wanna a self-driving car you can go buy a Tesla a new car you can drive on its own but if you wanna your 10 year order Toyota driving on its own richer feared that's a very complex well let's today Network automation how to deal with it you have to deal with multi vendor technology Marty years of technology so people spend a lot of money the return are very small they so they have a right to affair afraid of it but the challenge is there is what's alternative yeah I think that is one of the things that's very unique about the definite community is within the community we have technical stakeholders from small startups to really large partners or huge enterprises and when we're all here in the demo soon we're all engineers and we're all exchanging ideas kind of no matter what the scale so it becomes this great mixing of you know shared experiences and ideas and that is some of the most interesting conversations that I've actually heard this week is people talking about how maybe they're using one Cisco platform in these two very different environments and exchanging ideas about how they do that or maybe how they're using a Cisco platform with an open-source tool and then people finding value in thinking oh maybe I can do that in my environment so that part of the ecosystem and community is very interesting and then we're also helping partners find each other so we do a lot of work around you know here's a partner in the Cisco ecosystem who goes and installs Meraki networks right here's a software partner who builds mapping technology on top of indoor Wi-Fi networks and getting those two together because the software partner is not going to install the network and the network person may not write that application in that way and so bringing them together we've had a lot of really good information coming back from the community around kind of finding each other and being able to deliver those outcomes what are you guys doing Tom we'll start with you how are you guys working together to infuse and integrate security into the technologies and that from a customer's perspective those risks that dial down yeah so so we're in Cisco's integrating security across all of our product portfolio right and and that includes our data center portfolio all the way through our campus our when all those portfolios so we continue to look for opportunities to to integrate you know whether it's dual factor authentication or things like secure data center with a fire you know of highly scalable multi instance firewall in front of a data center things like that so we're we're definitely looking for areas and angles and opportunities for us to not only integrate it from a product standpoint but also ensure that we are talking that story with our customers so that they know they can they can leverage Cisco for the full architecture from a security 
standing on the storage of the data from an encryption perspective and as it gets moved or his mobile you know that that level of security and policy follows it you know wherever the data is secure of course enemy everybody always wants more performance they want lower cost security in many ways has begun to trump those other two attributes they've they've become table stakes security as well but security is really number one now ya talk about that talk about the major trends that you're seeing well of course of course security now is top of mine for everyone board level conversations executive level conversations all the time I think what ends up happening is in the past we would think about it as Network performance cost etc security as a tangent kind of side conversation now of course it's built into everything that we do [Music]
**Summary and Sentiment Analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
CenturyLink | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
10 year | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Toyota | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Frank | PERSON | 0.99+ |
ServiceNow | TITLE | 0.98+ |
two | QUANTITY | 0.98+ |
this week | DATE | 0.97+ |
four years ago | DATE | 0.97+ |
two attributes | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
a week | QUANTITY | 0.95+ |
98% | QUANTITY | 0.95+ |
Marty | PERSON | 0.95+ |
Tom | PERSON | 0.95+ |
one | QUANTITY | 0.94+ |
cisco | ORGANIZATION | 0.93+ |
RSA | ORGANIZATION | 0.92+ |
a day | QUANTITY | 0.91+ |
two different cloud environments | QUANTITY | 0.9+ |
first few | QUANTITY | 0.9+ |
ACA | TITLE | 0.89+ |
Meraki | ORGANIZATION | 0.86+ |
Silicon angle | ORGANIZATION | 0.82+ |
global fortune 500 | ORGANIZATION | 0.8+ |
2019 | DATE | 0.79+ |
things | QUANTITY | 0.77+ |
one of | QUANTITY | 0.76+ |
two very different | QUANTITY | 0.76+ |
last 30 years | DATE | 0.76+ |
number one | QUANTITY | 0.75+ |
Sizzle Reel | ORGANIZATION | 0.71+ |
money | QUANTITY | 0.68+ |
times | QUANTITY | 0.66+ |
last three | DATE | 0.62+ |
US | LOCATION | 0.61+ |
lot | QUANTITY | 0.6+ |
checkpoint | ORGANIZATION | 0.56+ |
myriad of use cases | QUANTITY | 0.55+ |
edge | TITLE | 0.54+ |
Prem | TITLE | 0.44+ |
Live | COMMERCIAL_ITEM | 0.2+ |
AI-Powered Workload Management
>> From the Silicon Angle Media Office in Boston, Massachusetts, it's theCUBE. Now here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to theCUBE's Boston area studio. This is a CUBE conversation. Happy to welcome to the program first-time guest Benjamin Nye, CEO of Turbonomic, a Boston-based company. Ben, thanks so much for joining us. >> Stu, thanks for having me. >> Alright Ben, so as we say, we are fortunate to live in interesting times in our industry. Distributed architectures are what we're all working on, but at the same time, there's a lot of consolidation going on. You know, just to put this in context, just in the recent past, IBM spent 34 billion dollars to buy Red Hat. And the reason I bring that up is a lot of people talk about, you know, it's a hybrid multi-cloud world. What's going on? The thing I've been saying for a couple of years is, as users, there are two things you need to watch. They care about their data an awful lot. That's what drives businesses. And what drives the data really? It's their applications. >> Perfect. >> And that's where Turbonomic sits. Workload automation is where you are. And that's really the important piece of multi-cloud. Maybe give our audience a little bit of context as to why this, really, IBM buying Red Hat fits into the general premise of why Turbonomic exists. >> Super. So the IBM Red Hat combination I think is really all about managing workloads. Turbonomic has always been about managing workloads, and actually Red Hat was an investor, is an investor in Turbonomic, particularly for OpenStack, but more importantly OpenShift now. When you think about the plethora of workloads, we're gonna have 10-to-one the number of workloads relative to VMs and so forth when you look at microservices and containers. So when you think about that combination, it's really an important move for IBM and their opportunity to play in hybrid and multi-cloud. They just announced the IBM Multicloud Manager, and then they said, wait a minute, we gotta get this thing to scale. Obviously OpenShift and Red Hat is scale: 8.9 million developers in their community, and the opportunity to manage those workloads across on-prem and off in a cloud-native format is critical. So relate that to Turbo. Turbo is really about managing any workload in any environment anywhere at all times. And so we make workloads smart, which is self-managing anywhere in real time, which allows the workloads themselves to care for their own performance assurance, policy adherence, and cost effectiveness. And when you can do that, then they can run anywhere. That's what we do. >> Yeah, Ben, bring us inside of customers. When people hear applications and multi-cloud, there was the original thing: oh well, I'm gonna be able to burst to the cloud, I'm gonna be moving things all the time. Applications usually have data behind them. There's gravity, it's not easy to move them. But I wanna be able to have that flexibility of, if I choose a platform, if I move things around... I think back to the storage world. Migration was one of the toughest things out there and something that I spent the most time and energy to constantly deal with. What do you see today when it comes to those applications? How do they think about them? Do they build them one place and they're static? Is it a little bit more modular now when you go to microservices? What do you see and hear? >> Great, so we have over 2,100 accounts today including 20% of the Fortune 500, so a pretty good sample set to be able to describe this. What I find is that CIOs today, and I meet with many of them, want either born in the cloud, migrate to the cloud, or run my infrastructure as cloud. And what they mean is they're seeking greater agility and elasticity than they've ever had. And workloads thrive in that environment. So as we decompose the applications and decompose the infrastructure and open it up, there are now more places to run those different workloads, and they seek the flexibility to be able to create applications much more quickly, set up environments a lot faster, and then they're more than happy to pay for what they use. But they get tired of the waste, candidly, of the traditional legacy environments. And so there's a constant evolution of how do I take those workloads and distribute them to the proper location for them to run most performantly, most cost effectively, and obviously with all the compliance requirements of security and data today. >> Yeah, I'm wondering if you could help connect the dots for us. In the industry, we talk a lot about digital transformation. >> Yeah. >> If we said two or three years ago there was a lot of buzz around this, when I talk to end users today, it's reality. Absolutely, it's not just, oh I need to be mobile and online and everything. What do you hear, and how do my workloads fit into that discussion? >> So it's an awesome subject. When you think about what's going on in the industry today, it's the largest and fastest re-platforming of IT ever. Okay, so when you think about, for example, the end of 2017, take away dollars and focus on workloads. There were 220 million workloads. 80% were still on-prem. For all the growth in the cloud, it was still principally an on-prem market. When you look forward now at the differential growth rates, there's 63% average growth across the cloud vendors, alright, in the IaaS market, and I'm principally focused on AWS and Azure, and only a 3% growth rate in the on-premise market, down from five years ago and continuing to decline because of the expense, fragility, and poor performance that customers are receiving. So the re-platforming is going on, and customers' number one question is, can you help me run my workloads in each of these three environments? So to your point, we're not yet where people are bursting these workloads in between one environment and another. My belief is that will come. But in today's world, you basically re-platform those workloads. You put them in a certain environment, but now you gotta make sure that you run them well, performantly and cost effectively, in those environments. And that's the digital transformation. >> Okay. So Ben, I think back to my career. If I turn back the clock even two decades, intelligence, automation, things we were talking about, it's different today. When I talk to the people building software, re-platforming, doing these things today, machine learning and AI, whatever favorite buzzword you have in that space, is really driving significant changes into this automation space. I think back to the early days of Turbonomic. I think about kinda the virtualization environments and the like. How is automation intelligence different today than it was, say, when the company was founded? >> Wow. Well, so for one, we've had to expand to this hybrid and multi-cloud world, right? So we've taken our data model, which is AIOps, and driven it out to include Azure and AWS. But you would say, why? Why is that important? And ultimately, when people talk about AIOps, what they really mean, whether it's on-prem or off, is resource-aware applications. I can no longer affect performance by manually running around and doing the care and feeding and taking these actions. It's just wasteful. And in the days where people got around that by over-provisioning on-prem, sometimes as much as 70 or 80% if you look at the resource actually used, it was far too expensive. Now take that to the public cloud, which is a variable cost environment, and I pay for that over-provisioning every second for the rest of my life, and it's just prohibitive. So if I want to leverage the elasticity and agility of the cloud, I have to do it in a smarter measure, and that requires analytics. And that's what Turbonomic provides. >> Yeah, and actually I really like the term AIOps. I wonder if you can put a little bit of a point on that, because there are many admins and architects out there that hear automation and AI and say, oh my gosh, am I gonna be put out of a job? I'm doing a lot of these things. Most people we know in IT, they're probably doing way more than they'd like to and not necessarily being as smart with it. So how does the technology plus the people, how does that dynamic change? >> So what's fascinating is, if you think about the role of tech, it was to remove some of the labor intensity in business. But when you then looked inside of IT, it's the most labor-intensive business you can find, right? So the whole idea was let's not have people doing low-value things; let's have them do high value. So today, when we virtualize an on-premises estate, we know that we can share it: run two workloads side by side. But when a workload spikes, or there's a noisy neighbor, we congest the physical infrastructure. What happens then is that it gets so bad that the application SLA breaks. Alerts go off, and we take super expensive engineers to go find, hopefully troubleshoot, and find root cause, and then do a non-disruptive action to move a workload from one host to another. Imagine if you could do that through pure analytics and software. And that's what our AIOps does. What we're allowing is that the workloads themselves will pick the resources that are least congested on which to run. And when they do that, rather than waiting for it to break and then trying to fix it with people, we just let it take that action on its own and trigger a vMotion and put it into a much happier state. That's how we can assure performance. We'll also check all the compliance and policies that govern those workloads before we make a move, so you can always know that you're in keeping with your affinity and anti-affinity rules, your HA/DR policies, your data sovereignty, all these different myriad regulations. Oh, and by the way, it'll be a lot more cost effective. >> Alright, Ben, you mentioned vMotion. So people that know virtualization, this was kind of magic when we first saw it, to be able to give me mobility with my workloads. Help modernize us with Kubernetes. Where does that fit in your environment? In a multi-cloud world, as far as I see, Kubernetes does not break the laws of physics and allow me to do vMotion across multi-clouds. So where does Kubernetes fit in your environment? And maybe you can give us a little bit of compare and contrast of kinda the virtualization world and Kubernetes, where that fits. >> Sure, so we look at containers, or the pods, a grouping of containers, as just another form of liquidity that allows workloads to move, alright? And so again we're decomposing applications down to the level of microservices. And now the question you have to ask yourself is, when demand increases on an application or indeed on a container, am I to scale up that container, or should I clone it and effectively scale it out? And that seems like a simple question, but when you're looking at it at huge amounts of scale, hundreds of containers or pods per workload or per VM, now the question is, okay, whichever way I choose, it can't be right unless I've also factored the imposition I'm putting on the VM in which that container and/or pod sits. Because if I'm adding memory in one, I have to add it to the other, 'cause I'm stressing the VM differentially, right? Or should I actually clone the VM as well and run that separately? And then there's another layer, the IaaS layer. Where should that VM run? In the same host and cluster and data center if it's on-prem, or in the same availability zone and region if it's off-prem? Those questions all the way down the stack are what need to be answered, and no one else has an answer for that. So what we do is we instrument a Kubernetes or an OpenShift, or even, on the other side, a Cloud Foundry, and we actually make the scheduler live and, what we call, autonomic: able to interrelate the demand all the way down through the various levels of the stack to assure performance, check the policy, and make sure it's cost effective. And that's what we're doing. So we actually allow the interrelationship between the containers and their schedulers all the way down through the virtual layer and into the physical layer. >> Yeah, that's impressive. You really just did a good job of explaining all of those pieces. One of the challenges when I talk to users is they're having a real hard time keeping up. (laughing) We said, I've started to figure out my cloud environment. Oh wait, I need to do things with containers. Oh wait, I hear about the serverless thing. What are some of the big challenges you're hearing from customers? Who do they turn to to help them stay on top of the things that are important for their business? >> So I think finding the sources of information now, in the information age, when everything has gone to software or virtual or cloud, has become harder. You don't get it all from the same one or two monolithic vendors, strategic vendors. I think they have to come to theCUBE as an example of where to find this information. That's why we're here. But I think in thinking about this, there are some interesting data points. First, on the skills gap, okay, Accenture did a poll of their customer base and found that only 14% of their customers thought they had the requisite skills on staff to warrant their moves to the cloud. Think about that number, so 86% don't. And here's another one. When you get this wrong, there's some fascinating data that says 80% of customers receive a cloud bill north of three times what they expected to spend. Now just think about that. I don't know which number's bigger, frankly, Stu. Is it the 80% or the three times? But there's the conversation: hey, boss, I just spent the entire annual budget in a little over a quarter. You still wanna get that cup of coffee? (laughing) So the costs of being wrong are enormously expensive. And then imagine if I'm not governing the policies and my workloads wind up in a country that they're not meant to per data sovereignty. And then we get breached. We have a significant problem there from a compliance standpoint. And the beauty is software can manage all this, and automation can help alleviate the constraint of the skills gap that's going on. >> Yeah, you're totally right. I think back to five years ago, I was at Amazon re:Invent. And they had a tool that started to monitor a little bit of, are you actually using the stuff that you're paying for? And there were customers walking out and saying, I can save 60 to 70% over what I was doing. Thank you, Amazon, for helping to point that out. When I lived on the data center side, with vendors that sold stuff, I couldn't imagine if your sales rep came and said, hey, we deployed this stuff and we know you spent millions of dollars; it seems like we over-provisioned you by two to three x what you expected. You'd be fired. So it's like Wall Street treats Amazon a little bit differently than they do everybody else. So on the one hand, we're making progress. There's lots of software companies like yourself, there's lots of companies helping people to optimize their cost on there. But still, this seems like there's a long way to go to get multi-cloud and the cost of what's going on there under control. Remember the early days? They said cloud was supposed to be simple and cheap, and it turned out to be neither of those. So Ben, I want to give you the opportunity: what do you see, both as an industry and for Turbonomic, what do the next kinda six to 12 months bring? >> Good, can I hit your cloud point first? It's just, when you think of Amazon, just to see how this changes: if I go and provision a workload in Amazon EC2 alone, there are 1.7 million different combinations from which I can choose across all the availability zones, all the regions, and all the services. There are 17 families in the compute service alone, as just one example. So Amazon looks at Turbonomic and says, you're almost a customer control plane for us. You're gonna understand the demand on the workload, and then you can help the customer, advise the customer, on which service, which instance types, all the way down through not just compute and memory but down into network and storage, are the ones that we should do. And the reason we can do this so cost effectively is we're doing it on the basis of a consumption plan, not an allocation plan. And Amazon, as a retailer in their origin, has cut prices 62 times, so they're very interested in using us as a means of making their customers more cost effective, so that they're indeed paying for what they use, but not paying for what they don't use. They've recognized us, giving us the migration tools competency as well as the third-party cloud management competencies, which frankly are very rare in the marketplace. And recognize that those are because production apps are now running at Amazon like never before. Azure, Microsoft Azure, is not to be missed on this one, right? So they've said, we too wanna make sure that we have cost effective operations. And what they've described is, when a customer moves to Azure, that's an Azure customer, okay. But then they need to make sure that they're growing inside of Azure, and there's a magic number of 5,000 dollars a month. If they exceed that, then they're Azure for life, okay? The problem becomes if they pause and they say, wow, this is expensive, or this isn't quite right. Now they just lost a year of growth. And so there's a whole opportunity with Azure, and they actually resell our assessment products for migration planning as well as the optimization thereafter. And the whole idea is to make sure, again, customers are only paying for what they use. So both of these platforms in the cloud are super aggressive with one another, but also relative to the on-prem legacy environments, to make sure that the workloads are coming into their arena. And if you look at the value of that, it's round numbers, about three to 6,000 dollars a year per workload. We have three million smart workloads that we manage today at Turbonomic. Think what that's worth in the realm of the prize for the public cloud vendors, and it's a really interesting thing. And we'll help the customers get there as cost effectively as they can. >> Alright, so back to looking forward. Would love to hear your thoughts on just what customers need broadly, and then some of the areas that we should look for Turbonomic in the future. >> Okay, so I think you're gonna continue to see customers look for outlets for this decomposed application as we've described it. So microservices, containers, and VMs running in multiple different environments. We believe that the next one, so today in market we have the SDDC, the software-defined data center, and virtualization. We have IaaS and PaaS in the public and hybrid cloud worlds. The next one, we believe, will be as applications at the edge become less pedestrian, more strategic and more operationally intensive; then you're talking about Amazon Prime delivery or your driverless cars or things along those lines. You're going to see that the edge really is gonna require the cell tower to become the next-generation data center. You're gonna see compute, memory, storage and networking on the cell tower, because I need to process and I can't take the latency of going back to the core, be it cloud core or on-premise core. And so you'll do both, but you'll need that edge processing. Okay, what we look at is, if that's the modern data center, and you have processing needs there that are critical for those applications that are yet to be born, then our belief is you're gonna need workload automation software, because you can't put people on every single cell tower in America or the rest of the world. So this is sort of a confirming trend to us that we know we're in the right direction. Always focus on the workloads, not the infrastructure. If you make the application workloads perform, then the business will run well regardless of where they perform. And in some environments, like a modern-day cell tower, there's just not gonna be the opportunity to put people in manual response to a break-fix problem set at the edge. So that's kinda where we see these things headed. >> Alright, well Ben Nye, pleasure to catch up with you. Thanks so much for giving us the update on where the industry is and Turbonomic specifically. And thank you so much for watching. Be sure to check out theCube.net for all of our coverage. Of course we're at all the big cloud shows including AWS re:Invent and KubeCon in Seattle later this year. So thank you so much for watching theCUBE. (gentle music)
SUMMARY :
in Boston, Massachusetts, it's the Cube. Happy to welcome to the program first time guest And the reason I bring that up is a lot of people talk about And that's really the important piece of multi-cloud. and the opportunity to manage those workloads and something that I spent the most time and energy and then they're more than happy to pay for what they use. In the industry, we talk a lot about digital transformation. and how do my workloads fit into that discussion? And that's the digital transformation. and the like. And in the days where people got around that Yeah and actually I really like the term AI ops. it's the most labor intensive business you can find, right? compare contrast of kinda the virtualization world And now the question you have to ask yourself is One of the challenges when I talk to users, And the beauty is software can manage all this So on the one hand, we're making progress. And the reason we can do this so cost effectively Turbonomic in the future. and I can't take the latency of going back to the core, And thank you so much for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Ben Nye | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Benjamin Nye | PERSON | 0.99+ |
Ben | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
America | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
1.7 million | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
220 million | QUANTITY | 0.99+ |
63% | QUANTITY | 0.99+ |
Ajur | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
62 times | QUANTITY | 0.99+ |
17 families | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
three times | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Accentra | ORGANIZATION | 0.99+ |
60 | QUANTITY | 0.99+ |
Seattle | LOCATION | 0.99+ |
86% | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Turbonomic | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
3% | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
three million | QUANTITY | 0.99+ |
70 | QUANTITY | 0.99+ |
34 billion dollars | QUANTITY | 0.99+ |
millions of dollars | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two | DATE | 0.99+ |
end of 2017 | DATE | 0.99+ |
five years ago | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
14% | QUANTITY | 0.98+ |
over 2,100 accounts | QUANTITY | 0.98+ |
two decades | QUANTITY | 0.98+ |
Boston, Massachusetts | LOCATION | 0.98+ |
today | DATE | 0.98+ |
Wall Street | LOCATION | 0.98+ |
one example | QUANTITY | 0.98+ |
8.9 million developers | QUANTITY | 0.98+ |
12 months | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
two things | QUANTITY | 0.97+ |
70% | QUANTITY | 0.97+ |
three years ago | DATE | 0.97+ |
a year | QUANTITY | 0.96+ |
Silicon Angle Media Office | ORGANIZATION | 0.96+ |
CubeCon | EVENT | 0.96+ |
Prime | COMMERCIAL_ITEM | 0.96+ |
later this year | DATE | 0.95+ |
Billy Southerland, IronRoad | Inforum DC 2018
(upbeat music) >> Live from Washington D.C., it's TheCUBE. Covering InForum, D.C. 2018. Brought to you by Infor. >> Well, good morning and welcome to day two here on theCUBE at Inforum 2018. We are in the nation's capital, the Walter Washington Convention Center, and thank goodness the sun's come out today. Everybody's got big smile and cheery faces, it's good to see. Dave Vellante, John Walls here. We're just on top of the show floor. You'll see a lot of activity a little bit later on in the day. And it's a pleasure to welcome our first guest of the day, Billy Southerland who's the CEO of IronRoad. Billy, good morning to you. >> Good morning, thank you guys for having me on. >> Great to see you. >> Yeah, great to see you. >> How's the show been for you so far? >> It's been great. Yeah, it's been great. Outside of the fact that we got bumped from our hotel when we first showed up so (chuckles) No, but show's been fantastic, always great to network, learn what other folks have going on and yeah, been phenomenal. >> Tell us about IronRoad. What you do and why you're here. >> Yeah, so we're an HR and outsourcing company. And we've been doing HR and payroll since 1997. Company started really just with an idea. So as we have grown through the years, working with mostly small to medium size businesses, we had an opportunity with Infor just a couple years ago to partner with them on the payroll side of things. And so it's been a new opportunity for us, one that our team is incredibly excited about. Just great opportunity to partner with some phenomenal software and so yeah, that's-- >> So, services that you guys provide, so HR, payroll, you've got a portal, onboarding. Take us through that. Is that full suite of-- full complement of services? >> It is, yeah. So our typical client is a smaller to medium sized employer and we'll go in and so many of the things that they've got to do internally that have nothing to do with why they got into business, they can outsource to us. So, anything from the beginning to the end of an employee's life cycle is what we manage. You name it and we do it for them so that they can go and focus on what they do. >> So let me probe that a little bit. So if I have-- let's say I have an HR issue with an employee. Maybe they're a little older and I'm concerned that I am going through the right steps giving that employee the right guidance. I don't want to expose my company to any lawsuits or whatever. Can I call you up and say, hey, give me some guidance on how I should handle this from an HR perspective? What do I have to document? You would help me with that? >> David, that's the perfect example, right? And so the whole liability of being an employer is something that they can share with us, right? So, somebody that focuses on HR knows those laws and rules and regs. They can pick up the phone, they call us, they say, hey Billy, got an issue, can you come out? One of our folks will go out, consult with them, make sure that everything's documented, managed properly. And yeah, that's exactly what we would do. >> Okay, so with healthcare compliance, Obamacare, PTO policies. I'm a small company. I want to make sure that I'm not killing my cash flow with balance sheets stuff. I mean all that stuff, you can help with? >> You got it. Yeah, absolutely. You bring up healthcare. I don't know any employee, employer regardless of the size who's not dealing with that, right? 
So the whole ACA compliance with Obamacare has been a tremendous boom for our business because people are looking left and right, how do we deal with this? What do we do? It's so complex for them, they're looking for experts to manage it. >> I mean that's kind of the tip of the spear. That's why, particularly small, mid-size businesses, it's healthcare first because it's so expensive and it's so important to the employees, right? >> It is, yeah and I would say most folks that we deal with it's number two line item right after payroll, right? I mean they're dealing with healthcare and everybody's looking for answers. It's like, how do we do this? And the employees are asking the same question, right? And they're looking at the employers saying, give me a solution. There is no real solution outside of being able to maybe aggregate with some other smaller employers so we can go to the large healthcare companies that are out there and say, okay I tell you what, we got about 5,000 people here now. What do you think about our buying power at this point? >> You get some scale and then do the works. >> That's it, you just scale it, exactly right. >> Okay, 1997. Well, first of all, you're Cincinnati-based, I'll come back and talk about that. But 1997, just coming into the dot come boom, the state of software was, back then PeopleSoft was the gold standard. There was no cloud, really, you had these software companies doing, forget what they even called it now, but it was like software as a service pre-SAS. Kind of clunky software and now you fast-forward to today, you know, you're all cloud, you're agile but so how'd you get started? Take us through kind of the technology progression. >> Yeah, so the start was an interesting one. I wish we could tell you we had a great idea but it was a complete accident, right? We were trying to, I was trying to help out two different friends who were in two separate businesses. They both had done extremely well in their separate businesses. So they started what is now IronRoad and after about 12 months, both of them had done so well in their other businesses, they looked at each and said, they each thought the other one was going to be pulling the wagon, right? And so neither one of them wanted to do it. So one of the guys came to me and said, hey Billy, you want to buy 50% of this? And I said, well, what is it? And he explained it to me and I said, I love this concept, it's a great idea. And so I said, how much? He said, $8,000. (laughing) >> It's like a lawnmower. >> I bought half a lawnmower, right? >> Such a great idea, you sure you don't want to charge more? >> Yeah, I said, $8,000? But he had no clients, right? They had a little bit of software that they purchased to be able to do the payroll. So that's really where we started. So kind of caveman like you said, David. And so-- >> What's your client base now? What do you have? >> So we're using the Infor Cloud base. The human management capital system. >> As far as the number of organizations that you're serving. How have you grown the business? >> Pardon. Yeah, so you know really, it's just been good old-fashioned hard work for us. We've not made any purchases, no acquisitions. And so we got some amazing people that have a real passion about what we do and we do it really well. The differentiator between us and some of the big guys that are out there really is our people. Your people talk about that but our people are really focused on it. So you know-- and pretty soon, that reputation begins to spread. 
Like you said, we're in Cincinnati, Ohio and currently we're operating in 38 different states. So little bit at a time, year after year, we've been digging and digging and digging. In regards to the question you asked, David, right? So we start with the lawnmower and here we end up sitting with you guys talking about Infor and this cloud-based suite that we've been able to manage and bring in and so really exciting for somebody like us. >> So talk a little bit more about the CloudSuite, how you use it, how you use it to differentiate from the competition, you know why it's maybe better than some of the other alternatives you see? >> That's a great question. Because most our businesses' professional employer organization. Most of the PEO softwares are fairly limited in what they can offer the employers that they're working with. And so we vetted, we had Anka Kalp... Our CIO was vetting five different systems a couple years ago. And in the midst of vetting those five different systems, we were introduced to Infor, right? As we began to see what this software could do, we started getting really excited. You talk about a differentiator in the workplace, nobody else has it, right? And so we started learning more and more the human capital management system for us, we started thinking, man if we could take this to employees-- employers, that have anywhere between 500 and 5,000 employees, this is a real differentiator for us, right? And so nope, like I said, nobody else in the PEO space has this software and it's been a tremendous opportunity for us to take to the marketplace. >> So that's kind of your sweet spot, 500 to 5,000? So not under 100, right? True SMB is kind of not your sweet spot? >> Well, actually we'll go all the way down to 20 employees. But the 20 employer companies, the resources that they have internally to be able to integrate the systems is a little more challenging. But we get it done. And so anywhere between 20 and probably 5,000 employees are the typical employer that we're working with. >> So what kind of integration items does a customer have to think about, specifically? >> So by integration-- >> You said, small companies don't have the resources to do the integration so what has to be done to do that integration? >> Yeah, so it's a lot of lifting, right? I mean, there's lots of work to be able to establish the systems with the employers that we're taking, you know, the software to. Just a lot of hands on between IronRoad and the companies that we're dealing with so the smaller companies are really focused on, you know, going out and doing whatever it is whether they're contractor, doctor's office. So to be able to have a resource that can dedicate the time, to be able to activate the system and make it do what they want it to do is somewhat challenging for the smaller employers. >> But wouldn't they have to do that with any outsource HR provider? >> They would. They may not be able-- they probably are not taking the software to the depth of its utilization or potential utilization. So they're kind of doing without it. >> So the bigger guy's getting more business value out of your offer. >> There's no doubt about it or the smaller guys, it just takes a little bit longer to get 'em there. That's really the challenge. They both get the same value, just takes a little bit longer. >> 21 years you been doing this. So, you've obviously seen business change. >> Not that old, I don't know how that happened. >> Well, you started very young. (laughing) >> I'm glad you said that. 
I wondered why they skipped me with the makeup. I thank you guys. >> Don't need it. We do. (laughs) So you been 21 years. >> Yes. >> So you've seen business change, right? >> Yes. >> You've seen technology change, right? >> Oof, night and day. >> So where now? Where are the pain points now? Because it seems like, oh we've solved all these problems, right? Automation, things are much easier. Well, there's always a, yeah, but. So what's the but now for your folks? >> Yeah, I think the biggest thing for us in our industry is getting the message out. When we look at PEOs in Ohio, for example, about 2% of the workforce is working with the PEO. Because they're so few of 'em out there doing it really well, getting that message out to the employer because once we get 'em, once they come in and they see, you know you said they got to do this if they're outsourcing HR anyhow. Once they become aware of what's available to them, they don't leave, right? >> So their pain's still the same. >> Pain's still the same. >> You're just trying to get out, to let them know, you can help. >> That's it, that's it. I think that's probably our biggest pain point is how do you get this message out and different parts of the country, obviously, you've got different attitudes towards or people move at different paces. In Ohio, there's still, I'm looking at David saying, what is PEO? I've never heard of it. I don't know if I trust you. And so overcoming that is probably our biggest obstacle. >> Billy, you talk a little bit about Infor, it's products. If I understand it correctly, you're both a consumer and essentially a reseller of the services, which means you're running on the Amazon Cloud so talk about your relationship there, why Infor, why the product, how does it compare? Because you probably evaluated everything. >> We did, yeah. Yeah, we did. You know, for us, like I said, we vetted five different companies that we were looking at. And when we had a chance to look at the Infor proposal, the differentiator for us not only was the software, from our perspective, far and above better than anything else that we were looking at. They provided us with an opportunity since we were purchasing the software to be able to provide an in-tenant solution for current clients that Infor has. So an Infor client that looks at the software and says, hey, I want this, and yet they're still outsourcing their payroll, now has the ability to buy the software and outsource the payroll to IronRoad. And so you're taking the best in class cloud suite services from a human capital management system to the marketplace. And partnering with a company like Infor that really is a dream come true for us. >> So what makes it best in class? I mean, you know, Oracle's got good software. You got SAP out there, Workday's the hot company. Why is Infor, you said, better? Why is it better? >> Yeah, I think for us, just the ease of the employer being able to utilize the system. You can have the best thing in the world and people are people are people are people, right? They got to be able to get on there and use the stuff. And so I think the ease of being able to just the user-friendly side of what Infor does. They certainly have every option you can imagine. The capability, the software is as good, if not better, than any. But the ability for people to pick it up quickly and be able to use and make it real for their small business, to me that's the key, right? >> Was the use of AWS Cloud a factor? >> Um... >> Was that kind of transparent to you? 
>> Yeah, yeah, yeah. Not really. Yeah, yeah. >> Is there an aha moment when you're out there when you are pitching? And when you look up people and the processes they go through and they been doing it the same way for decades? So when you break through, how do you know you've broken through? What is it that you use to break through? >> Yeah, yeah, yeah and for them, once we're able to articulate what this system actually does, there is an aha moment. And it's almost disbelief. It's because there's so many years of doing it the old way, right? And then they look and see it's kind of like me looking at the software that your company's created that was phenomenal, right? They're looking at it and go, come on, really? It really does that? And it's, yeah, it really does that. (chuckles) And we can do this different and you can go sell more widgets, right? >> Showing Billy our video search software, so I appreciate that. >> Amazing! I mean, it's unbelievable. >> It is. >> Yeah, Star Trek. >> So I want to ask-- >> Baiting myself. >> We're all in the same boat. >> I want to ask you about the resources that are required for you to do integration with Infor. Actually, so outside funding, other than the $8,000 that you put in, have you guys raised outside funding? >> David, that was a lot of money at the time, man. >> Yeah, no doubt. >> (laughs) A lot of money. >> You could do a lot with $8,000, but you can't build a full software suite so have you taken outside capital, or? >> We haven't. >> So, self-funded. >> Yeah, we're self-funded and frankly, fortunately, we've been able to manage through it. This partnership with Infor for us is a big big step for us, right? But at this point, we've been able to manage that without any funding outside and... >> Okay so it's not like an intense engineering effort, right? You're turnkey-ing this stuff largely. So you put more of your effort on onboarding clients from what I understand, right? >> Right and working with other Infor partners. Bails, for example, was our implementation manager and so our folks working with Bails to make sure because we've got hundreds of clients that in lots of different industries that we've got to go out and roll this implementation out into, right? And so it's a little different than the typical Infor arrangement because they're so many different industries represented just through IronRoad. >> And you guys dog food this? They don't like when I say dog food. Do you drink your own champagne? So you're utilizing your-- >> Much better. (chuckles) >> You're utilizing the Infor software in-house, correct? >> We are, we are, yeah, yeah. If, you know, from an implementation standpoint, easy to do that, right? You have somebody like Bails and Cyndian that has helped us, phenomenal at what they do, great partners for Infor. But then we've got to turn around and take that out to hundreds of different employers. So scaling that is a bit of a challenge. And again, depending upon the amount of resources that the different clients have, which all changes depending upon their size. But it's been great, yeah. So far so good, thank you so much Yeah, appreciate it. >> Well, Billy thanks for your time. We do appreciate it and I assume at Cincinnati, that you might be one of those long-suffering Bengals fans. >> Hey, time out! Hey, two in one. >> I know. >> Two in one, Andy Dalton. We're not big Carolina fans right now. >> One in two here in New England. >> You guys are trouble. >> Well, we'll see after this week. 
>> The 40-something maybe hit the big Tom. >> Alright. That discussion to continue off the air. Billy Southerland, IronRoad CEO. >> Thank you guys so much, yeah, enjoyed it. >> We'll continue. We are live here in Washington D.C. at Inforum 2018. Back with more on theCUBE in just a bit. (electronic music)
SUMMARY :
Brought to you by Infor. and thank goodness the sun's come out today. Outside of the fact that we got bumped from our hotel What you do and why you're here. Just great opportunity to partner with some So, services that you guys provide, so HR, payroll, so that they can go and focus on what they do. giving that employee the right guidance. And so the whole liability of being an employer I mean all that stuff, you can help with? So the whole ACA compliance with Obamacare and it's so important to the employees, right? And the employees are asking the same question, right? and then do the works. you just scale it, exactly right. Kind of clunky software and now you fast-forward to today, So one of the guys came to me and said, So kind of caveman like you said, David. So we're using the Infor Cloud base. As far as the number of organizations that you're serving. In regards to the question you asked, David, right? And so nope, like I said, nobody else in the PEO space the resources that they have internally to be able to So to be able to have a resource that can dedicate the time, they probably are not taking the software to the depth So the bigger guy's getting more business value They both get the same value, 21 years you been doing this. Well, you started very young. I thank you guys. So you been 21 years. Where are the pain points now? getting that message out to the employer to let them know, you can help. And so overcoming that is probably our biggest obstacle. and essentially a reseller of the services, So an Infor client that looks at the software and says, I mean, you know, Oracle's got good software. But the ability for people to pick it up quickly Yeah, yeah, yeah. And we can do this different and you can go so I appreciate that. I mean, it's unbelievable. the $8,000 that you put in, But at this point, we've been able to manage that So you put more of your effort on onboarding clients in lots of different industries that we've got to go out And you guys dog food this? (chuckles) So far so good, thank you so much that you might be one of those long-suffering Bengals fans. Hey, two in one. Two in one, Andy Dalton. That discussion to continue off the air. Back with more on theCUBE in just a bit.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Marta | PERSON | 0.99+ |
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Chris Keg | PERSON | 0.99+ |
Laura Ipsen | PERSON | 0.99+ |
Jeffrey Immelt | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Chris O'Malley | PERSON | 0.99+ |
Andy Dalton | PERSON | 0.99+ |
Chris Berg | PERSON | 0.99+ |
Dave Velante | PERSON | 0.99+ |
Maureen Lonergan | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Paul Forte | PERSON | 0.99+ |
Erik Brynjolfsson | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Andrew McCafee | PERSON | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
Cheryl | PERSON | 0.99+ |
Mark | PERSON | 0.99+ |
Marta Federici | PERSON | 0.99+ |
Larry | PERSON | 0.99+ |
Matt Burr | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Dave Wright | PERSON | 0.99+ |
Maureen | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Cheryl Cook | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
$8,000 | QUANTITY | 0.99+ |
Justin Warren | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2012 | DATE | 0.99+ |
Europe | LOCATION | 0.99+ |
Andy | PERSON | 0.99+ |
30,000 | QUANTITY | 0.99+ |
Mauricio | PERSON | 0.99+ |
Philips | ORGANIZATION | 0.99+ |
Robb | PERSON | 0.99+ |
Jassy | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Mike Nygaard | PERSON | 0.99+ |
John Hodgson, Optum Technology - Red Hat Summit 2017
>> (Narrator) Live, from Boston, Massachusetts it's theCUBE, covering Red Hat Summit 2017, brought to you by Red Hat. >> Welcome back to Boston everybody, this is Red Hat Summit, and this is theCUBE, the leader in live tech coverage. I'm Dave Vellante, with my cohost Stu Miniman, and John Hodgson is here, he's the Senior Director of IT Program Management at Optum technology. John good to see ya. >> Good, it's good to be here. >> Fresh off the keynote, we were just talking about the large audience, a very large audience here. And Optum, you described a little bit at the keynote what Optum is with healthcare, sort of technology arm. Which is not super common but not uncommon in your world. But describe Optum and where it fits. >> So in the grand scheme of things within UnitedHealth Group you know, we have the parent company, of course, you know the Health Group, our insurance side, that does insurance, whether it's public sector for large corporations, as well as community and state government type work as UnitedHealthcare. They do all that, and then Optum is our technology side. We do really all the development, both for supporting UHC as our main customer, you know, they're truly our focus, but we also do a lot of commercial development as well for UnitedHealthcare's competitors. So big, big group, as I mentioned in the keynote. Over 10,000 developers in the company, lots of spend, I think in the last year our, just internal IT budget was like $1.2 billion in just IT development capital. So it's huge. >> Dave: Mind-boggling. >> John, you've got that internal Optum Cloud, Can you give us just kind of the breadth and depth, you said 1.2 billion, there. What is that make up, what geographies does that span, how many people support that kind of environment? >> As far as numbers of people supporting it, I think we've got a few hundred in our Enterprise Technology Services Group, that supports Optum Cloud. We started Optum Cloud probably a half a dozen years ago, and it's gone through its different iterations. And part of my job right now is all about Enterprise Cloud adoption and migration. So, we started with our own environment, we call it UCI, United, it was supposed to be Converged Infrastructure, but I call it our Cloud Infrastructure, that's really what it is. And we've continued to enhance that. So over the last few years, I think about 3.5, four years ago, we brought in Red Hat and OpenShift. We're on our third iteration of OpenShift. Very, very stable platform for us now. But we also have Azure Stack in there as well, I think even as Paul and those guys mentioned in the keynote there's a lot of different things that you can kind of pull from each one of the technology providers to help support what we're doing, kind of take the best of breed from each one of them, and use them in each solution. >> Organizations are always complaining that they spend all this money on keeping the lights on, and they're trying to make the shift, and obviously Cloud helps them do that, and things like OpenShift, etc. What's that like in your world? How much of your effort is spent on maintenance and keeping the lights on? Sounds like you got a lot of cool, new development activity. Can you describe that dynamic for us? >> Yeah, we've got a really good support staff. Our group, SSMO, when we build an application, they kind of take it back over and run everything. We've got a fabulous support team in the background. And to that end, and it's on both sides, right? 
We have our UnitedHealthcare applications that we build that have kind of their own feature set, because of what it's doing internally for us, versus what we do on the OptumInsight side, where it's more commercial in nature. So they have some different needs. Some of the things that we're developing, even for Cloud Scaffolding that I mentioned in the keynote. We're kind of working on both sides of the fence, there, to hit the different technologies that each one of them really need to be successful, but doing it in a way that it doesn't if you're on one side of the fence or the other, it's a capability that everybody will be able to use. So if there's a pattern on one side that you want to be able to use for a UHC application, by all means, go ahead and grab it, take it. And a lot of what we're doing now is even kind of crowdsourcing things, and utilizing the really super intelligent people that we have, over 10,000 developers. And so many of them, we've got a lot of legacy stuff. So there's some old-school guys that are still doing their thing, but we've got a lot of new people. And they want to get their hands on the new fresh stuff, and experience that. So there's really a good vibe going on right now, with how things are changing, all the TDP folks that we're bringing in. A lot of fresh college grads and things. And they love to see the new technologies, whether it's OpenShift or whatever. Lot are really getting into DevOps, trying to make that change in a big organization is difficult, we got a little ways to go with that. But that's kind of next up. >> You're an interesting case study, because you've got a lot of the old and a lot of cool innovation going on. And is it, how do you decide when to go, because DevOps is not always the answer. Sometimes waterfall is okay, you know. So, how do you make that determination, and where do you see that going? >> That's a great question, that's actually part of what my team does. So my specific team is all about Cloud adoption and migration, so our charter is really to work across the enterprise. So whether it's OptumInsight, OptumRx, UnitedHealthcare, we are working with them to evaluate their portfolios of applications to figure out legacy applications that we have that are still strategic. They've got life in them, they've got business benefit. And we want to be able to take advantage of that, but at the same time there's some of these monolithic applications that we look at how can we take that application, decompose it down into microservices and APIs, things like that, to make it available to other applications that maybe are just greenfield, are coming out now, but still need that same technology and information. So that's really what my team is doing right now. So we sit down with those teams and go through an analysis, help them develop a road map. And sometimes that road map is two or three years long. Getting to fully cloud from where they're at right now in some of these legacy applications is a journey. And it costs money, right? There's a lot of budget concerns and things like that that go with it. So that's part of what we helped develop is a business case for each one of those applications that we can help support them going back, and getting the necessary capital to do the cloud migrations and the improvements, and really the modernization of their applications. 
We started the program a couple of years ago and found that if you want to hang your hat on just going from old physical infrastructure, some of the original VMs that we had. And just moving over to cloud infrastructure, and whether that's UCI, OpenShift, Azure, whatever. If you're going to do your business case on that, you're going to be writing a lot of business cases before you get one approved. It's all about modernizing the applications. So if you fold in the move to new infrastructure, cloud infrastructure, along with the ability to modernize that application, get them doing agile development, getting down the DevOps path, looking at automated testing, automated deployment, zero downtime deployments. All of those things, when you add them up together and say, okay, here's what your real benefit looks like. And you're able to present that back to the business, and show them speed to market, speed to value is a new metric that we have. Getting things out there quickly. We used to do quarterly releases, or even biannual releases. And now we're at monthly, weekly, some of our applications that are more relatively new, Health4Me, if you go to the App Store, that's kind of our big app on the App Store. There's updates on a very frequent basis. >> So that's the operating model, really, that you're talking about, essentially, driving business value. We had a practitioner on a couple weeks ago, and he said, "If you just lift and shift to the cloud, "and you don't change your operating model, "you won't get a dime." >> Stu: You're missing the boat. >> Maybe there's something, some value there, a little faster, but you're talking about serious dollars, if you can change the operating model. And that's what you've found? >> Yeah absolutely, and that's the, it's a shift, and you've got to be able to prove it to the business that's there's benefit there, and sometimes that's hard. Some of these cloud concepts and things are a little nebulous, so-- >> It's hard 'cause it's soft. >> It's soft, right, yeah, I mean, you're putting the business case together, the hard stuff is easy to document, but when you're talking about the soft benefits, and you're trying to explain to them the value that they're going to get out of their team switching from a waterfall development over to agile and DevOps, and automated testing and things like that, where I can say, "Hey listen, "you know your team over here that has been, "you know we took them out of the pocket, "from actually doing their day jobs for the last week, "because they needed to test this new version? "If I can take that out of the mix, "and they don't have to do that anymore, "and they can keep on doing what they're doing "and not get a week behind, what value is that for you?" And all of a sudden they're like, "Oh really? "We don't have to do that anymore?" I'm like, "No, we can create test scripts and stuff. "We can automate your deployment. "We can make it zero downtime. "We have," there's an application that we're working on now that has 19,000 individual desktop deployments. And we're going to automate that, turn it into a software as a service application, host it on OpenShift, and completely knock that out. I mean deployments out to 19,000 people take weeks to get done. We only do a couple thousand a week, because there's obviously going to be issues. 
So now you've got helpdesk tickets, you've got desktop technicians that are going round, trying to fix things, or dialing in, remoting into somebody's desktop to try to help figure that all out. We can do the whole deployment in a day, and everybody logs in the next day, and they've got the new version. That kind of value in creating real cloud-based applications is what's driving the benefit for us. And they're finally starting to really see that. And as we're doing it, more application product owners are going, "Okay, now we're getting some traction. "We heard what you did over here. "Come talk to us, and let's talk "about building a road map and figuring out what we can do." >> John, one of the questions I got from the community after watching you keynote was, they want to understand how you handle security and enforce compliance in this new cloud development model. (laughs) >> That's beyond me, all I can tell you is that we have one of the most secure clouds out there. Our private cloud is beyond secure. We're working right now to try to get the public hybrid cloud space with both AWS and Azure, and working through contracts and stuff right now. But one of the sticking points is our security has to be absolutely top notch, if we're going to do anything that has HIPAA-related data, PHI, PII, PCI, any of that, it has got to be lock-solid secure. And we have a tremendous team led by Robert Booker, he's absolutely fabulous, I mean we're, our whole goal, security-wise, is don't be the next guy on the front page of the Wall Street Journal. >> You mentioned public cloud, how do you make your decisions as to what application, what data can live in which public cloud? You said you've got Azure Stack, and you've got OpenShift. How do you make those platform decisions? >> Well right now, both OpenShift and Azure Stack are on our internal private cloud. So we're in the process of kind of making that shift to move over towards public and hybrid cloud. So I'm working with folks on our team to help develop some of those processes and determine what's actually going to be allowed. And I think in a lot of cases the PHI and protected data is going to stay internal. And we'll be able to take advantage of hosting certain parts of an application on public cloud while keeping other parts of the data really secure and protected behind our private cloud. >> Red Hat made an announcement this morning with AWS, with OpenShift. >> Sounds like that might be of interest to you, would that impact what your doing? >> Absolutely, yeah, in fact I was talking with Jim and Paul back behind the screen this morning. And we were talking about that and I was like wow that is a game changer. With what we're thinking about doing in the hybrid cloud space, having all of the AWS APIs and services and stuff available to us. Part of the objection that I get from some folks now is knowing that we have this move toward public and hybrid cloud internally, and the limitations of our cloud. We're never going to be, our private Optum Cloud is never going to be AWS or Azure, it's just not. I mean they've spent billions of dollars getting those services and stuff in place. Why would we even bother to compete with that? So we do what we do well, and a big portion of that is security. But we want to be able to expand, and take advantage of the things that they have. So that's, this whole announcement of being able to take advantage of those services natively within OpenShift? 
If we're able to expose that, even internally, on our own private cloud? That's going to take away a lot of the objections, I think, from even our own folks, who are waiting to do the public hybrid cloud piece. >> When the Affordable Care Act hit, did your volume spike? And as things, there's a tug of war now in Washington, it could change again, does that drive changes in your application development in terms of the volume of requests that come in, and compliance things that you have to adhere to? And if so, does having a platform that's more agile, how does that affect your ability to respond? >> Yeah it does, I mean when we first got into the ACA, there was a number of markets that we got into. And there was definitely a ramp-up in development, new things that we had to do on the exchanges. Stuff like that. I mean we even had groups from Optum that were participating directly with the federal government, because some of their exchanges were having issues, and they needed some help from us. So we had a whole team that was kind of embedded with the federal government, helping them out, just based on our experience doing it. And, yeah, having the flexibility, in our own cloud, to be able to able to spin up environments quickly, shut them down, all that, really it's invaluable. >> So the technology business moves so fast, I mean it wasn't that long ago when people saw the first virtualized servers and went Oh my gosh, this is going to change the world. And now it's like, wow we got to do better, and containers. And so you've gone for this amazing transformation, I mean, I think it was 17 developers to 1,600, which is just mind-boggling. Okay, and that's, and you've got technologies that have helped you do that, but five years down the road there's going to be a what's next. So what is next for you? As you break out your telescope, what do you see? >> God, I don't know, I mean I never would have predicted containers. >> Even though they've been around forever, we-- >> Yeah I mean when we first went to VMs, you know back in the day I was a guy in the server room, racking and stacking servers and running cables, and doing all that, so I've seen it go from one extreme to the next. And going from VMs was a huge switch. Building our own private cloud was amazing to be a part of, and now getting into the container side of things, hybrid cloud, I think for us, really, the next big step for us is the hybrid cloud. So we're in the process of getting that, I assume by the end of this year, early next, we'll be a few steps into the hybrid cloud space. And then beyond that, gosh I don't know. >> So that's really extending the operating model into that hybrid cloud notion, bringing that security that you talked about, and that's, you got a lot of work to do. >> John: That's a big task in itself. >> Let's not go too far beyond that, John. Alright well listen, thanks for coming on theCUBE, it was really a pleasure having you. >> Yeah, thanks for having me guys, appreciate it. >> You're welcome, alright keep it right there everybody, Stu and I will be back with our next guest. This is theCUBE, we're live from Red Hat Summit in Boston. We'll be right back. (electronic music)
SUMMARY :
brought to you by Red Hat. and John Hodgson is here, And Optum, you described a little bit at the keynote So in the grand scheme of things within UnitedHealth Group What is that make up, what geographies does that span, of the technology providers to help support and things like OpenShift, etc. Some of the things that we're developing, and where do you see that going? and really the modernization of their applications. So that's the operating model, really, And that's what you've found? and you've got to be able to prove it to the business "If I can take that out of the mix, John, one of the questions I got from the community of the Wall Street Journal. How do you make those platform decisions? and protected data is going to stay internal. with AWS, with OpenShift. and take advantage of the things that they have. So we had a whole team that was kind of embedded So the technology business moves so fast, God, I don't know, I mean I never and now getting into the container side of things, So that's really extending the operating model it was really a pleasure having you. Stu and I will be back with our next guest.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
UnitedHealthcare | ORGANIZATION | 0.99+ |
John Hodgson | PERSON | 0.99+ |
UnitedHealth Group | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
UHC | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Robert Booker | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
$1.2 billion | QUANTITY | 0.99+ |
Affordable Care Act | TITLE | 0.99+ |
Health Group | ORGANIZATION | 0.99+ |
17 developers | QUANTITY | 0.99+ |
19,000 people | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Stu | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
App Store | TITLE | 0.99+ |
Washington | LOCATION | 0.99+ |
1.2 billion | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
1,600 | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
both sides | QUANTITY | 0.99+ |
four years ago | DATE | 0.99+ |
Boston | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
Optum | ORGANIZATION | 0.99+ |
Enterprise Technology Services Group | ORGANIZATION | 0.99+ |
Red Hat Summit | EVENT | 0.99+ |
one | QUANTITY | 0.99+ |
Red Hat Summit 2017 | EVENT | 0.98+ |
both | QUANTITY | 0.98+ |
each solution | QUANTITY | 0.98+ |
half a dozen years ago | DATE | 0.98+ |
Azure Stack | TITLE | 0.98+ |
over 10,000 developers | QUANTITY | 0.98+ |
Over 10,000 developers | QUANTITY | 0.97+ |
DevOps | TITLE | 0.97+ |
last week | DATE | 0.97+ |
third iteration | QUANTITY | 0.97+ |
first | QUANTITY | 0.96+ |
ACA | TITLE | 0.96+ |
OpenShift | TITLE | 0.96+ |
HIPAA | TITLE | 0.96+ |
five years | QUANTITY | 0.96+ |
Optum Technology | ORGANIZATION | 0.95+ |
each one | QUANTITY | 0.95+ |
federal government | ORGANIZATION | 0.94+ |
one side | QUANTITY | 0.94+ |
next day | DATE | 0.94+ |
billions of dollars | QUANTITY | 0.94+ |
UCI | ORGANIZATION | 0.93+ |
SSMO | ORGANIZATION | 0.93+ |
this morning | DATE | 0.92+ |
a day | QUANTITY | 0.92+ |
couple weeks ago | DATE | 0.92+ |
Azure | TITLE | 0.91+ |
couple of years ago | DATE | 0.91+ |
Optum Cloud | TITLE | 0.89+ |