Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and enterprise customers. And not everybody wants to be in the top 500, what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything is now kind of converged toward Ethernet. I mean, there's still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see also the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, so it is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, we announced this in August. This is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
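The factor-of-six claim never gets unpacked on air, so here is one rough way it can show up, as a back-of-the-envelope sketch rather than Broadcom's own math. The port configurations are assumptions (a 25.6 Tb/s chip taken as 64 x 400G, a 51.2 Tb/s chip as 128 x 400G): providing 128 non-blocking 400G ports from the smaller chips takes a two-tier leaf-spine of six switches and three hops, while the bigger chip does it in one switch and one hop.

```python
# Back-of-the-envelope sketch (assumed configs, not from the interview):
# switches needed to offer a given number of non-blocking 400G endpoint ports.
def switches_needed(endpoint_ports: int, radix: int) -> int:
    if radix >= endpoint_ports:
        return 1                                    # one chip, one hop
    leaves = -(-endpoint_ports // (radix // 2))     # half of each leaf faces hosts
    spines = -(-(leaves * (radix // 2)) // radix)   # the other half feeds the spine layer
    return leaves + spines                          # two-tier leaf-spine, three hops

print(switches_needed(128, 64))    # 6 switches of the 25.6T class (4 leaves + 2 spines)
print(switches_needed(128, 128))   # 1 switch of the 51.2T class
```

Six chips collapsing to one, with the inter-switch optics and two extra hops disappearing along with them, is one way a roughly sixfold gain in efficiency can appear; real deployments will differ.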
>> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T5, with some like Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the NICs that are going in there, what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs in the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But say state of the art right now, we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen, where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and say, hey, we want to see flexibility and choice and, hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it easy and simple to configure the network for essentially MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there's going to be some learning curves there. And so what we want to do is really simplify that so that we can make it easy to install, get the cluster up and running and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or you just say what would be cool is we'll put this in the T6? >> No, we've had a very long partnership both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group, within Broadcom, we've then gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see again, in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly.
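The lane math behind those port speeds is simple: port speed is the number of SerDes lanes times the per-lane signaling rate. A tiny illustrative sketch of the pairings implied above (not an exhaustive list of standardized options):

```python
# Port speed = lanes x per-lane SerDes rate; the combinations below are the ones
# discussed above, shown only to make the arithmetic explicit.
def port_speed_gbps(lanes: int, serdes_gbps: int) -> int:
    return lanes * serdes_gbps

print(port_speed_gbps(4, 50))    # 200 GbE NIC: four lanes of 50G PAM-4
print(port_speed_gbps(4, 100))   # 400 GbE: four lanes of 100G PAM-4
print(port_speed_gbps(8, 100))   # 800 GbE: eight lanes of 100G PAM-4
```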
Look, you start talking about the largest supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be two million CPUs, two million CPU cores. An exaflop of performance. What are the outward limits of T5 in switches, building out a fabric, what does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale out cluster before you need another switch? Or what does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what's the form factor look like for where that T5 sits? Is there just one in a chassis or you have... What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza boxes. And you can have composable systems where, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interfaced to. These days what tends to happen is you'd have a pizza box and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the line card. >> David: Okay. >> So what we see, the most common form factor for this, I'd say for North America, would be a 2RU with 64 OSFP ports. And often each of those OSFP, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU, you've got 51.2 T. We do see some cases where customers would like to have different optics and they'll actually deploy 4RU, just so that way they have the faceplate density. So they can plug in 128, say, QSFP112. But yeah, it really depends on which optics, if you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially HPC workloads, you have the MPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that MPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on Ethernet.
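From the application side, that portability is the point: the same MPI code runs whether the fabric underneath is InfiniBand or RoCE over Ethernet, because the transport is chosen in the MPI library's build and runtime configuration, not in the program. A minimal sketch, assuming mpi4py and NumPy are available and that the launcher and file name shown in the comment are placeholders:

```python
# Minimal, fabric-agnostic MPI sketch (illustrative; the fabric/transport choice
# lives in the MPI runtime configuration, not in this code).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(4, rank, dtype=np.float64)   # each rank contributes its own values
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)     # summed across all ranks over the fabric

if rank == 0:
    print(total)   # e.g. launch with: mpirun -n 8 python allreduce_demo.py
```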
If you look at MPI, officially it was designed to run on InfiniBand, but now what you see with Broadcom, with the great work they're doing, now we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML, where do you think we're going to be next year or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So I mean, what I see, I mean, with Ethernet, what we've seen, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology. >> David: Moore who? (all laughing) >> So I see that, that trajectory is going to continue as far as the switches doubling in bandwidth. I think the protocols are evolving, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA, for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of the applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be doubling of bandwidth over time, evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-packaged optics. So right now, this chip, all the balls on the back here, those are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically based, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look ahead, we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, the radixes are increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet, I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive with AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is to drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you look at training a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that.
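To put rough numbers on the big-pipe point, here is an illustrative calculation only: the dataset size is assumed, and protocol overhead, storage throughput and accelerator limits are ignored. It simply shows how the time to move a training data set shrinks as the NIC speed goes up.

```python
# Rough, illustrative arithmetic (assumed dataset size; ignores protocol overhead,
# storage throughput, and accelerator limits): time to move a training data set
# from storage to the compute nodes at different NIC speeds.
def transfer_hours(dataset_terabytes: float, link_gbps: float) -> float:
    bits = dataset_terabytes * 1e12 * 8
    return bits / (link_gbps * 1e9) / 3600

dataset_tb = 50.0   # hypothetical training corpus
for gbps in (100, 200, 400):
    print(f"{gbps} GbE: {transfer_hours(dataset_tb, gbps):.1f} hours per full pass")
```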
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is being kind of like the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you like your engineers? You get to name it. >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's just not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
David Nicholson | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Pete | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
August | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
30 speeds | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
50 gig | QUANTITY | 0.99+ |
Armando | PERSON | 0.99+ |
128 | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
9,000 | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
128, 400 gig | QUANTITY | 0.99+ |
800 gig | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
512 channels | QUANTITY | 0.99+ |
9,352 | QUANTITY | 0.99+ |
24 months | QUANTITY | 0.99+ |
one chip | QUANTITY | 0.99+ |
Tomahawk 4 | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
one | QUANTITY | 0.98+ |
512 fiber | QUANTITY | 0.98+ |
seven times | QUANTITY | 0.98+ |
Tomahawk 5 | COMMERCIAL_ITEM | 0.98+ |
four lanes | QUANTITY | 0.98+ |
9,000 plus | QUANTITY | 0.98+ |
Dell Technologies | ORGANIZATION | 0.98+ |
today | DATE | 0.97+ |
Aquaman | PERSON | 0.97+ |
Both | QUANTITY | 0.97+ |
InfiniBand | ORGANIZATION | 0.97+ |
QSFP 112 | OTHER | 0.96+ |
hundred gig | QUANTITY | 0.96+ |
Peter Del Vecchio | PERSON | 0.96+ |
25.6 terabytes per second | QUANTITY | 0.96+ |
two fascinating guests | QUANTITY | 0.96+ |
single source | QUANTITY | 0.96+ |
64 OSFP | QUANTITY | 0.95+ |
Rocky | ORGANIZATION | 0.95+ |
two million CPUs | QUANTITY | 0.95+ |
25.6 T. | QUANTITY | 0.95+ |
Alexey Surkov, Deloitte | Amazon re:MARS 2022
(upbeat music) >> Okay, welcome back everyone to theCube's coverage of AWS re:Mars here in Las Vegas. I'm John Furrier, host of theCube. Got Alexey Surkov, Partner at Deloitte joining me today. We're going to talk about AI, biased AI, trust, trust in the AI to save the planet and to save us from the technology. Alexey, thanks for coming on. >> Thank you for having me. >> So you had a line before you came on camera that described the show, and I want you to say it if you don't mind because it was the best line that for me, at least from my generation. >> Alexey: Sure. >> That describes the show and then your role at Deloitte in it. >> Alexey: Sure. Listen, I mean, I, you know, it may sound a little corny, but to me, like I look at this entire show, at this whole building really, and like everybody here is trying to build a better Skynet, you know, better, faster, stronger, more potent, you know, and it's like, we are the only ones, like we're in this corner of like Deloitte trustworthy AI. We're trying to make sure that it doesn't take over the world. So that's, you know, that's the gist of it. How do you make sure that AI serves the good and not evil? How do you make sure that it doesn't have the risk? It doesn't, you know, it's well controlled, that it does what we're, what we're asking it to do. >> And of course for all the young folks out there, the Terminator is the movie, and it's highly referenced in the nerd circles. Skynet's evil, and humanity goes away and lives underground and fights for justice, and I think wins at the end. Terminator 3, I don't, I can't remember what happened there, but anyway. >> Alexey: I thought the good guys win, but, you know, that's. >> I think they do win at the end. >> Maybe. >> So that brings up the whole point because what we're seeing here is a lot of futuristic positive messages. I mean, three areas solve a lot of problems in the daily lives. You know, machine learning, day to day hard problems. Then you have this new kind of economy emerging, you know, machine learning, driving new economic models, new industrial capabilities. And then you have this whole space save the world vibe, you know, like we discover the moon, new water sources, maybe save climate change. So very positive future vibe here at re:Mars. >> Alexey: Absolutely. Yeah, and it was really exciting just watching, you know, watching the speakers talk about the future, and conquering space, and mining on the moon like it's happening already. It's really exciting and amazing. Yeah. >> Let's talk about what you guys are working on at Deloitte because I think it's fascinating. You're starting to see the digital transformation get to the edge. And when I say edge, I mean back office is done with cloud and you still have the old, you know, stuff, the old models that people still use, but now new innovative things are happening. Pushing software out there that's driving you with the FinTech, these verticals, and the trust is a huge factor. Not only do the consumers have trust issues, who owns my data, there's also trust in the actual algorithms. >> Exactly. >> You guys are in the middle of this. What's your advice to clients, 'cause they want to push the envelope hard, be cutting edge, >> Alexey: Right. >> But they don't want to pull back and get caught with their, you know, data out there that might have been a misfire or hack. >> Absolutely. Well, I mean the simple truth is that, you know, with great power comes great responsibility, right?
So AI brings a lot of promise, but there are a lot of risks, you know. You want to make sure that it's fair, that it's not biased. You want to make sure that it's explainable, that you can figure out and tell others what it's doing. You might want to make sure that it's well controlled, that it's responsible, that it's robust, that, you know, if somebody feeds it bad data, it doesn't produce results that don't make sense. If somebody's trying to provoke it, to do something wrong, that it's robust to those types of interactions. You want to make sure that it preserves privacy. You know, you want to make sure that it's secure, that nobody can hack into it. And so all of those risks are somewhat new. Not all of them are entirely new. As you said, the concept of model risk management has existed for many years. We want to make sure that each black box does what it's supposed to do. AI and machine learning just raise it to the next level. And we're just trying to keep up with that and make sure that we develop processes, you know, controls, that we look at technology that can orchestrate all this de-risking of the transition to AI. >> Deloitte's a big firm. We saw you in the US Open, the sponsorship was all over the TV. So now you're here at the re:Mars show that's all about building up this next infrastructure in space and machine learning, what's the role you have with AWS and this re:Mars? And what's that in the context of your overall relationship to the cloud players? >> Alexey: Well, we are, we're one of the largest strategic alliances for AWS, and AWS is one of the largest ones for Deloitte. We do a ton of work with AWS related to cloud, related to AI machine learning, a lot of these new areas. We did a presentation here just the other day on conversational AI, really cutting edge stuff. So we do all of that. So in some ways we participate in that part of the, the part of the room that I mentioned that is trying to kind of push the envelope and get the new technologies out there, but at the same time, Deloitte is a brand that carries a lot of, you know, history of trust, and responsibility, and controls, and compliance, and all of that comes, >> John: You get a lot of clients. I mean, you have big names. Get a lot of big name enterprises >> Right. >> That relied on you. >> Right, and so >> They rely on you now. >> Exactly, yeah. And so, it is natural for us to be in the marketplace, not only with the message of, you know, let's get to the better mousetrap in AI and machine learning, but also let's make sure that it's safe, and secure, and robust, and reliable, and trustworthy at the end of the day. And so, so this trustworthy message is intertwined with everything that we do in AI. We encourage companies to consider trustworthiness from the start. >> Yeah. >> It shouldn't be an afterthought, you know. Like I always say, you know, if you have deployed a bot and it's been deciding whether to issue loans to people, you don't want to find out that it was like, you know, biased against a certain type of (indistinct) >> I can just see in the boardroom, the bot went rogue. >> Right, yeah. >> Through all those loans you know. >> And you don't want to find out about it like six months later, right? That's too late, right? So you want to build in these controls from the beginning, right? You want to make sure that, you know, you are encouraging innovation, you're not stifling any development, and allowing your- >> There's a lot of security challenges too.
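As a rough illustration of what "controls built in from the beginning" can look like in practice, here is a minimal sketch, not Deloitte's methodology, with hypothetical data, field names and thresholds: one check that compares approval rates across groups before anyone reads about it in the papers, and one guard that rejects obviously bad inputs before they reach the model.

```python
# Illustrative sketch only (hypothetical data and thresholds): two simple automated
# controls of the kind described above, run around a lending model so that problems
# surface in testing rather than six months after deployment.

def approval_rate_gap(decisions):
    """decisions: list of (group, approved) pairs; returns (worst/best ratio, per-group rates)."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

def validate_input(application, limits):
    """Reject obviously out-of-range data before it reaches the model."""
    return all(limits[k][0] <= application[k] <= limits[k][1] for k in limits)

decisions = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
ratio, rates = approval_rate_gap(decisions)
if ratio < 0.8:   # threshold is a policy choice to trigger human review, not a law of nature
    print(f"approval-rate gap needs review: {rates}")

print(validate_input({"income": 52_000, "age": 34}, {"income": (0, 5e6), "age": (18, 120)}))
```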
I mean, it's like, this is the digital transformation sweet spot you're in right now. So I have to ask you, what's the use case, obviously call center's obvious, and bots, and having, you know, self-service capabilities. Where are the customers at right now on psychology and their appetite to push the envelope? And what do you guys see as areas that are most important for your customers to pay attention to? And then where do you guys ultimately deliver the value? >> Sure. Well, our clients, I think, are aware of the risks of AI. They are not, that's not the first thing that they're thinking about for the most part. So when we come to them with this message they listen, they're very interested. And a lot of them have begun this journey of putting in kind of governance, compliance, controls, to make sure that as they are proceeding down this path of building out AI, that they're doing it responsibly. So it is in a nascent stage. >> John: What defines responsibility? >> Well, you want to, okay, so responsibility is really having governance. Like you have a, you build a robot dog, right? So, but you want to make sure that it has a leash, right? That it doesn't hurt anybody, right? That you have processes in place that at the end of the day, humans are in control, right? I don't want to go back to the Skynet analogy, right? >> John: Yeah. >> But humans should always be in control. There should always be somebody responsible for the functioning of the algorithm that can throw the switch at the right time, that can tweak it at the right time, that can make sure that you nudge it in the right direction, that at no point should somebody be able to say, oh, well, it's not my fault. The algorithm did it, and that's why we're in the papers today, right? So that's the piece that's really complex, and what we try to do for our clients, as Deloitte always does, is kind of demystify that, right? >> John: Yeah. >> So what does it actually mean from a procedures, policies, >> John: Yeah, I mean, I think, >> Tools, technology, people. >> John: Yeah, I mean, this is like the classic operationalizing a new technology, managing it, making sure it doesn't get out of control if you will. >> Stay on the leash if you will. >> Alexey: Exactly. Yeah. And I guess one piece that I always like to mention is that, it's not to put brakes on these new technologies, right? It's not to try to kind of slow people down in developing new things. I actually think that making AI trustworthy is enabling the development of these technologies, right? The way to think about it is that, we have, you know, seat belts, and ABS brakes, and, you know, airbags today. And those are all things that didn't exist like 100 years ago, but our cars go a lot faster, and we're a lot safer driving them. So, you know, when people say, oh, I hate seatbelts, you know, you're like, okay, yes, but first of all, there are some safety technologies that you don't even notice, which is how a lot of AI controls work. They blend into the background. And more importantly, the idea is for you to go faster, not slower. And that's what we're trying to enable our clients to do. >> Well, Alexey, great to have you on theCube. We love having Deloitte come on to share their expertise. Final question for you is, where do you see this show going? Where do you guys, obviously you're here, you're participating, you got a big booth here, where's this going? And what's next, where's the next dots that connect?
Share your vision for this show, and kind of how it, or the ecosystem, and this ecosystem, and where you're going to intersect that? >> Wow. I mean, this show is already kind of pushing the boundaries. You know, we're talking about machine learning, artificial intelligence, you know, robotics, space. You know, I guess next thing I think, you know, we'll be probably spending a lot of time in the metaverse, right? So I can see like next time we come here, you know, half of us are wearing VR headsets and walking around in meta worlds, but, you know, it's been an exciting adventure and, you know, I'm really excited to partner and spend, you know, spend time with AWS folks, and everybody here because they're really pushing the envelope on the future, and I look forward to next year. >> The show is small, so it feels very intimate, which is actually a good feeling. And I think the other thing, the metaverse, I heard that too. I heard quantum. I've heard both of those for next year, quantum and metaverse. >> Okay. >> Well, why not? >> Why not? Exactly, yeah. >> Thanks for coming on theCube. Appreciate it. >> Thank you. >> All right. It's theCube coverage here on the ground. Very casual Cube. Two days of live coverage. It's not as hot and heavy as re:Invent, but it's a great show bringing all the best smart people together, really figuring out the future, you know, solving day to day problems, and setting the new economy, the new industrial economy. And of course, a lot of the world problems are going to be helped and solved, a very positive message, space among other things, here at re:Mars. I'm John Furrier. Stay with us for more coverage after this short break. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Alexey Surkov | PERSON | 0.99+ |
Alexey | PERSON | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Two days | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
US | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
six months later | DATE | 0.99+ |
John furrier | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
Amazon | ORGANIZATION | 0.98+ |
one piece | QUANTITY | 0.97+ |
three areas | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
one | QUANTITY | 0.96+ |
Mars | LOCATION | 0.95+ |
100 years ago | DATE | 0.94+ |
first thing | QUANTITY | 0.93+ |
each black box | QUANTITY | 0.91+ |
Terminator | TITLE | 0.9+ |
half | QUANTITY | 0.88+ |
Skynet | ORGANIZATION | 0.88+ |
theCube | ORGANIZATION | 0.86+ |
Terminate three | TITLE | 0.85+ |
Invent | TITLE | 0.67+ |
2022 | DATE | 0.65+ |
theCube | COMMERCIAL_ITEM | 0.62+ |
FinTech | ORGANIZATION | 0.62+ |
re | EVENT | 0.6+ |
Mars | ORGANIZATION | 0.59+ |
Skynet | TITLE | 0.57+ |
re:Mars | TITLE | 0.47+ |
Cube | COMMERCIAL_ITEM | 0.46+ |
MARS | EVENT | 0.41+ |
Mars | EVENT | 0.36+ |
Mars | TITLE | 0.32+ |
Empowerment Through Inclusion | Beyond.2020 Digital
>>Yeah, yeah. >>Welcome back. I'm so excited to introduce our next session empowerment through inclusion, reimagining society and technology. This is a topic that's personally very near and dear to my heart. Did you know that there's only 2% of Latinas in technology as a Latina? I know that there's so much more we could do collectively to improve these gaps and diversity. I thought spot diversity is considered a critical element across all levels of the organization. The data shows countless times. A diverse and inclusive workforce ultimately drives innovation better performance and keeps your employees happier. That's why we're passionate about contributing to this conversation and also partnering with organizations that share our mission of improving diversity across our communities. Last beyond, we hosted the session during a breakfast and we packed the whole room. This year, we're bringing the conversation to the forefront to emphasize the importance of diversity and data and share the positive ramifications that it has for your organization. Joining us for this session are thought spots Chief Data Strategy Officer Cindy Housing and Ruhollah Benjamin, associate professor of African American Studies at Princeton University. Thank you, Paola. So many >>of you have journeyed with me for years now on our efforts to improve diversity and inclusion in the data and analytic space. And >>I would say >>over time we cautiously started commiserating, eventually sharing best practices to make ourselves and our companies better. And I do consider it a milestone. Last year, as Paola mentioned that half the room was filled with our male allies. But I remember one of our Panelists, Natalie Longhurst from Vodafone, suggesting that we move it from a side hallway conversation, early morning breakfast to the main stage. And I >>think it was >>Bill Zang from a I G in Japan. Who said Yes, please. Everyone else agreed, but more than a main stage topic, I want to ask you to think about inclusion beyond your role beyond your company toe. How Data and analytics can be used to impact inclusion and equity for the society as a whole. Are we using data to reveal patterns or to perpetuate problems leading Tobias at scale? You are the experts, the change agents, the leaders that can prevent this. I am thrilled to introduce you to the leading authority on this topic, Rou Ha Benjamin, associate professor of African studies at Princeton University and author of Multiple Books. The Latest Race After Technology. Rou ha Welcome. >>Thank you. Thank you so much for having me. I'm thrilled to be in conversation with you today, and I thought I would just kick things off with some opening reflections on this really important session theme. And then we could jump into discussion. So I'd like us to as a starting point, um, wrestle with these buzzwords, empowerment and inclusion so that we can have them be more than kind of big platitudes and really have them reflected in our workplace cultures and the things that we design in the technologies that we put out into the world. And so to do that, I think we have to move beyond techno determinism, and I'll explain what that means in just a minute. Techno determinism comes in two forms. The first, on your left is the idea that technology automation, um, all of these emerging trends are going to harm us, are going to necessarily harm humanity. They're going to take all the jobs they're going to remove human agency. 
This is what we might call the techno dystopian version of the story and this is what Hollywood loves to sell us in the form of movies like The Matrix or Terminator. The other version on your right is the techno utopian story that technologies automation. The robots as a shorthand, are going to save humanity. They're gonna make everything more efficient, more equitable. And in this case, on the surface, he seemed like opposing narratives right there, telling us different stories. At least they have different endpoints. But when you pull back the screen and look a little bit more closely, you see that they share an underlying logic that technology is in the driver's seat and that human beings that social society can just respond to what's happening. But we don't really have a say in what technologies air designed and so to move beyond techno determinism the notion that technology is in the driver's seat. We have to put the human agents and agencies back into the story, the protagonists, and think carefully about what the human desires worldviews, values, assumptions are that animate the production of technology. And so we have to put the humans behind the screen back into view. And so that's a very first step and when we do that, we see, as was already mentioned, that it's a very homogeneous group right now in terms of who gets the power and the resource is to produce the digital and physical infrastructure that everyone else has to live with. And so, as a first step, we need to think about how to create more participation of those who are working behind the scenes to design technology now to dig a little more a deeper into this, I want to offer a kind of low tech example before we get to the more hi tech ones. So what you see in front of you here is a simple park bench public bench. It's located in Berkeley, California, which is where I went to graduate school and on this particular visit I was living in Boston, and so I was back in California. It was February. It was freezing where I was coming from, and so I wanted to take a few minutes in between meetings to just lay out in the sun and soak in some vitamin D, and I quickly realized, actually, I couldn't lay down on this bench because of the way it had been designed with these arm rests at intermittent intervals. And so here I thought. Okay, the the armrest have, ah functional reason why they're there. I mean, you could literally rest your elbows there or, um, you know, it can create a little bit of privacy of someone sitting there that you don't know. When I was nine months pregnant, it could help me get up and down or for the elderly, the same thing. So it has a lot of functional reasons, but I also thought about the fact that it prevents people who are homeless from sleeping on the bench. And this is the Bay area that we were talking about where, in fact, the tech boom has gone hand in hand with a housing crisis. Those things have grown in tandem. So innovation has grown within equity because we haven't thought carefully about how to address the social context in which technology grows and blossoms. And so I thought, Okay, this crisis is growing in this area, and so perhaps this is a deliberate attempt to make sure that people don't sleep on the benches by the way that they're designed and where the where they're implemented and So this is what we might call structural inequity. By the way something is designed. It has certain effects that exclude or harm different people. 
And so it may not necessarily be the intense, but that's the effect. And I did a little digging, and I found, in fact, it's a global phenomenon, this thing that architects called hostile architecture. Er, I found single occupancy benches in Helsinki, so only one booty at a time no laying down there. I found caged benches in France. And in this particular town. What's interesting here is that the mayor put these benches out in this little shopping plaza, and within 24 hours the people in the town rallied together and had them removed. So we see here that just because we have, uh, discriminatory design in our public space doesn't mean we have to live with it. We can actually work together to ensure that our public space reflects our better values. But I think my favorite example of all is the meter bench. In this case, this bench is designed with spikes in them, and to get the spikes to retreat into the bench, you have to feed the meter you have to put some coins in, and I think it buys you about 15 or 20 minutes. Then the spikes come back up. And so you'll be happy to know that in this case, this was designed by a German artists to get people to think critically about issues of design, not just the design of physical space but the design of all kinds of things, public policies. And so we can think about how our public life in general is metered, that it serves those that can pay the price and others are excluded or harm, whether we're talking about education or health care. And the meter bench also presents something interesting. For those of us who care about technology, it creates a technical fix for a social problem. In fact, it started out his art. But some municipalities in different parts of the world have actually adopted this in their public spaces in their parks in order to deter so called lawyers from using that space. And so, by a technical fix, we mean something that creates a short term effect, right. It gets people who may want to sleep on it out of sight. They're unable to use it, but it doesn't address the underlying problems that create that need to sleep outside in the first place. And so, in addition to techno determinism, we have to think critically about technical fixes that don't address the underlying issues that technology is meant to solve. And so this is part of a broader issue of discriminatory design, and we can apply the bench metaphor to all kinds of things that we work with or that we create. And the question we really have to continuously ask ourselves is, What values are we building in to the physical and digital infrastructures around us? What are the spikes that we may unwittingly put into place? Or perhaps we didn't create the spikes. Perhaps we started a new job or a new position, and someone hands us something. This is the way things have always been done. So we inherit the spike bench. What is our responsibility when we noticed that it's creating these kinds of harms or exclusions or technical fixes that are bypassing the underlying problem? What is our responsibility? All of this came to a head in the context of financial technologies. I don't know how many of you remember these high profile cases of tech insiders and CEOs who applied for Apple, the Apple card and, in one case, a husband and wife applied and the husband, the husband received a much higher limit almost 20 times the limit as his wife, even though they shared bank accounts, they lived in Common Law State. And so the question. 
There was not only the fact that the husband was receiving a much better interest rate and the limit, but also that there was no mechanism for the individuals involved to dispute what was happening. They didn't even know what the factors were that they were being judged that was creating this form of discrimination. So in terms of financial technologies, it's not simply the outcome that's the issue. Or that could be discriminatory, but the process that black boxes, all of the decision making that makes it so that consumers and the general public have no way to question it. No way to understand how they're being judged adversely, and so it's the process not only the product that we have to care a lot about. And so the case of the apple cart is part of a much broader phenomenon of, um, racist and sexist robots. This is how the headlines framed it a few years ago, and I was so interested in this framing because there was a first wave of stories that seemed to be shocked at the prospect that technology is not neutral. Then there was a second wave of stories that seemed less surprised. Well, of course, technology inherits its creator's biases. And now I think we've entered a phase of attempts to override and address the default settings of so called racist and sexist robots, for better or worse. And here robots is just a kind of shorthand, that the way people are talking about automation and emerging technologies more broadly. And so as I was encountering these headlines, I was thinking about how these air, not problems simply brought on by machine learning or AI. They're not all brand new, and so I wanted to contribute to the conversation, a kind of larger context and a longer history for us to think carefully about the social dimensions of technology. And so I developed a concept called the New Jim Code, which plays on the phrase Jim Crow, which is the way that the regime of white supremacy and inequality in this country was defined in a previous era, and I wanted us to think about how that legacy continues to haunt the present, how we might be coding bias into emerging technologies and the danger being that we imagine those technologies to be objective. And so this gives us a language to be able to name this phenomenon so that we can address it and change it under this larger umbrella of the new Jim Code are four distinct ways that this phenomenon takes shape from the more obvious engineered inequity. Those were the kinds of inequalities tech mediated inequalities that we can generally see coming. They're kind of obvious. But then we go down the line and we see it becomes harder to detect. It's happening in our own backyards. It's happening around us, and we don't really have a view into the black box, and so it becomes more insidious. And so in the remaining couple minutes, I'm just just going to give you a taste of the last three of these, and then a move towards conclusion that we can start chatting. So when it comes to default discrimination. This is the way that social inequalities become embedded in emerging technologies because designers of these technologies aren't thinking carefully about history and sociology. Ah, great example of this came Thio headlines last fall when it was found that widely used healthcare algorithm affecting millions of patients, um, was discriminating against black patients. And so what's especially important to note here is that this algorithm healthcare algorithm does not explicitly take note of race. 
That is to say, it is race neutral by using cost to predict healthcare needs. This digital triaging system unwittingly reproduces health disparities because, on average, black people have incurred fewer costs for a variety of reasons, including structural inequality. So in my review of this study by Obermeyer and colleagues, I want to draw attention to how indifference to social reality can be even more harmful than malicious intent. It doesn't have to be the intent of the designers to create this effect, and so we have to look carefully at how indifference is operating and how race neutrality can be a deadly force. When we move on to the next iteration of the new Jim code coded exposure, there's attention because on the one hand, you see this image where the darker skin individual is not being detected by the facial recognition system, right on the camera or on the computer. And so coated exposure names this tension between wanting to be seen and included and recognized, whether it's in facial recognition or in recommendation systems or in tailored advertising. But the opposite of that, the tension is with when you're over included. When you're surveiled when you're to centered. And so we should note that it's not simply in being left out, that's the problem. But it's in being included in harmful ways. And so I want us to think carefully about the rhetoric of inclusion and understand that inclusion is not simply an end point. It's a process, and it is possible to include people in harmful processes. And so we want to ensure that the process is not harmful for it to really be effective. The last iteration of the new Jim Code. That means the the most insidious, let's say, is technologies that are touted as helping US address bias, so they're not simply including people, but they're actively working to address bias. And so in this case, There are a lot of different companies that are using AI to hire, create hiring software and hiring algorithms, including this one higher view. And the idea is that there there's a lot that AI can keep track of that human beings might miss. And so so the software can make data driven talent decisions. After all, the problem of employment discrimination is widespread and well documented. So the logic goes, Wouldn't this be even more reason to outsource decisions to AI? Well, let's think about this carefully. And this is the look of the idea of techno benevolence trying to do good without fully reckoning with what? How technology can reproduce inequalities. So some colleagues of mine at Princeton, um, tested a natural learning processing algorithm and was looking to see whether it exhibited the same, um, tendencies that psychologists have documented among humans. E. And what they found was that in fact, the algorithm associating black names with negative words and white names with pleasant sounding words. And so this particular audit builds on a classic study done around 2003, before all of the emerging technologies were on the scene where two University of Chicago economists sent out thousands of resumes to employers in Boston and Chicago, and all they did was change the names on those resumes. All of the other work history education were the same, and then they waited to see who would get called back. And the applicants, the fictional applicants with white sounding names received 50% more callbacks than the black applicants. So if you're presented with that study, you might be tempted to say, Well, let's let technology handle it since humans are so biased. 
But my colleagues here in computer science found that this natural language processing algorithm actually reproduced those same associations with black and white names. So, too, with gender coded words and names Amazon learned a couple years ago when its own hiring algorithm was found discriminating against women. Nevertheless, it should be clear by now why technical fixes that claim to bypass human biases are so desirable. If Onley there was a way to slay centuries of racist and sexist demons with a social justice box beyond desirable, more like magical, magical for employers, perhaps looking to streamline the grueling work of recruitment but a curse from any jobseekers, as this headline puts it, your next interview could be with a racist spot, bringing us back to that problem space we started with just a few minutes ago. So it's worth noting that job seekers are already developing ways to subvert the system by trading answers to employers test and creating fake applications as informal audits of their own. In terms of a more collective response, there's a federation of European Trade unions call you and I Global that's developed a charter of digital rights for work, others that touches on automated and a I based decisions to be included in bargaining agreements. And so this is one of many efforts to change their ecosystem to change the context in which technology is being deployed to ensure more protections and more rights for everyday people in the US There's the algorithmic accountability bill that's been presented, and it's one effort to create some more protections around this ubiquity of automated decisions, and I think we should all be calling from more public accountability when it comes to the widespread use of automated decisions. Another development that keeps me somewhat hopeful is that tech workers themselves are increasingly speaking out against the most egregious forms of corporate collusion with state sanctioned racism. And to get a taste of that, I encourage you to check out the hashtag Tech won't build it. Among other statements that they have made and walking out and petitioning their companies. Who one group said, as the people who build the technologies that Microsoft profits from, we refuse to be complicit in terms of education, which is my own ground zero. Um, it's a place where we can we can grow a more historically and socially literate approach to tech design. And this is just one, um, resource that you all can download, Um, by developed by some wonderful colleagues at the Data and Society Research Institute in New York and the goal of this interventionist threefold to develop an intellectual understanding of how structural racism operates and algorithms, social media platforms and technologies, not yet developed and emotional intelligence concerning how to resolve racially stressful situations within organizations, and a commitment to take action to reduce harms to communities of color. And so as a final way to think about why these things are so important, I want to offer a couple last provocations. The first is for us to think a new about what actually is deep learning when it comes to computation. I want to suggest that computational depth when it comes to a I systems without historical or social depth, is actually superficial learning. And so we need to have a much more interdisciplinary, integrated approach to knowledge production and to observing and understanding patterns that don't simply rely on one discipline in order to map reality. 
The last provocation is this. If, as I suggested at the start, inequity is woven into the very fabric of our society, built into the design of our policies, our physical infrastructures, and now even our digital infrastructures, that means that each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problems that we're up against will be their undoing once we accept that we are pattern makers. So what does that look like? It looks like refusing color blindness as an antidote to tech-mediated discrimination. Rather than refusing to see difference, let's take stock of how the training data and the models that we're creating have built-in decisions from the past that have often been discriminatory. It means actually thinking about the underside of inclusion, which can be targeting, and asking how we create a more participatory rather than predatory form of inclusion. And ultimately, it also means owning our own power in these systems so that we can change the patterns of the past. If we inherit a spiked bench, that doesn't mean we need to continue using it. We can work together to design more just and equitable technologies. So with that, I look forward to our conversation.

>>Thank you, Ruha. I expected that to be amazing, as I have been devouring your book in the last few weeks, so I knew it would be impactful. I know we will never think about park benches the same way again. And you laid down the gauntlet, oh my goodness, with that Tech Won't Build It hashtag. Well, I would say, if the ThoughtSpot team has any say, we absolutely will build it, and we will continue to educate ourselves. So, you made the point that it doesn't matter whether it was intentional or not; the unintentional has just as big an impact. How do we address that? Does it just start with awareness building, or how do we address it?

>>Yeah, so it's important to have good intentions, and by saying that intentions are not the end-all, be-all, it doesn't mean that we're throwing intentions out. But it is saying that there are so many things that happen in the world unwittingly, without someone sitting down to make them good or bad. And this goes on both ends. The analogy I often use is: if I'm parked outside and I see someone breaking into my car, I don't run out there and say, now, do you feel in your heart that you're a thief? Do you intend to be a thief? I don't go and grill their identity or their intention to harm me; I look at the effect of their actions. So in terms of the teams that we work on, one of the things we can do, again, is to have a range of perspectives around the table that can think ahead, like chess, about how things might play out. But also, once we've created something and it's entered into the world, we need regular audits and check-ins to see when it's going off track. Just because we intended to do good when we set it out, when it goes sideways we need formal mechanisms, built into the process, that can get it back on track or even remove it entirely. We see that with different products that get recalled. We need that to be formalized, rather than putting the burden on the people who are using these things to raise the awareness, or to have to come to us, like with the Apple Card, to say this thing is not fair.
Why don't we have that built into the process to begin with?

>>Yeah, so a couple of things. My dad used to say the road to hell is paved with good intentions, so that's...

>>Yes. And in fact, in the book, I say the road to hell is paved with technical fixes. So me and your dad are on the same page.

>>And I love your point about bringing different perspectives. I often say this is why diversity is not just about business benefits; it's your best recipe for identifying the early biases in the data sets and in the way we build things. And yet it's such a thorny problem to address, bringing new people into tech. So in the absence of that, what do we do? Is it outside review boards? Or do you think regulation is the best bet, as you mentioned a few of these?

>>Yeah, we really need a combination of things. On the one hand, we need something like a do-no-harm ethos, like we see in medicine, so that it becomes part of the fabric and the culture of organizations, so that those social values have equal or more weight than the other kinds of economic imperatives. So we have to have a reckoning in-house, but we can't leave it to the people who are designing these things, and who have a vested interest in getting them to market, to regulate themselves. We also need independent accountability. So we need a combination of those.

And going back to your point about diversity on teams, one really cautionary example comes to mind from last fall, when Google's new Pixel 4 phone was about to come out, and it had a kind of facial recognition component so that you could unlock the phone with your face. They had been following the research that shows facial recognition systems don't work as well on darker-skinned individuals, and they wanted to get a head start; they wanted to prevent that. So they had good intentions. They didn't want their phone to lock out darker-skinned users. So what they did was try to diversify their training data: they hired contract workers and told them to engage Black people, have them use the phone, play with some kind of app, take a selfie, so that their faces would populate the training set. But they did not tell those people what their faces were going to be used for; they withheld that information. They didn't tell them it was being used for the facial recognition system. The contract workers went to the media and said, something's not right here, why are we being told to withhold information? And in fact, going back to the park bench example, they had been told to approach people who are homeless and give them $5 gift cards to play with the phone and get their images into the set. This all came to light, and Google withdrew the research and the process, because it was so in line with a long history of using the most marginalized and vulnerable people and populations to make technologies better, when those technologies are likely going to harm them, in terms of surveillance and other things. I bring this up to go back to our question of how the composition of teams might help address this. I think often about who was in the room making the decision to create this process with the contract workers and the selfies and so on.
Perhaps it was a racially homogeneous group, where people weren't really sensitive to how this could be experienced or seen. But maybe it was a racially diverse group, and perhaps the history of harm when it comes to science and technology wasn't part of their training; maybe they didn't have that disciplinary knowledge. So it could also be a function of what people knew in the room, whether they could play that chess game in their heads and see how this was going to play out, and realize it was not going to play out very well. And the last possibility is that maybe there was disciplinary diversity, maybe there was racial and ethnic diversity, but maybe the workplace culture made it so that those people didn't feel they could speak up. You could have all the diversity in the world, but if you don't create a context in which people who have those insights feel they can speak up and be respected and heard, then you're basically sitting on a reservoir of resources and not tapping into it to do right by your company. So it's one of those cautionary tales, I think, that we can all learn from as we try to create environments where we can elicit those insights from our teams and our coworkers.

>>Your point about the culture: this is really inclusion, very different from just diversity in thought. So, I like to end on a hopeful note, a prescriptive note. You have some of the most influential data and analytics leaders and experts attending virtually here. If you imagine the way we use data, and housing is a great example, mortgage lending has not been equitable for African Americans in particular. But if you imagine the right way to use data, what does the future hold when we've gotten better at this, more aware of this?

>>Thank you for that question. There are a few things that come to mind for me. The mortgage environment is really the perfect context in which to think through both where the problem and where the solutions may lie. One of the most powerful ways I see data being used by different organizations and groups is to shine a light on past and ongoing inequities. Oftentimes, when people see the bias, say with the hiring algorithm or the language audit, when they see the names associated with negative or positive words, that tends to have a bigger impact, because they think, wow, the technology is reflecting these biases, it really must be true. Never mind that people might have been raising these issues in other ways long before. So one of the most powerful ways we can use data and technology is as a mirror onto existing forms of inequality, which can then motivate us to try to address those things. The caution is that once we come to grips with the problem, the solution is not simply going to be a technical solution. We have to understand both the promise of data and the limits of data. Take a hiring algorithm that is now trained to look for diversity as opposed to homogeneity, and say I get hired through one of those algorithms into a new workplace. I can get through the door and be hired, but if nothing else about that workplace has changed, and on a day-to-day basis I'm still experiencing microaggressions, still experiencing all kinds of issues, then that technology just gave me access to a harmful environment. And so this is the idea that we can't simply expect the technology to solve all of our problems.
We have to do the hard work. So I would encourage everyone listening to accept the promise of these tools, but, really crucially, to understand that the real kinds of changes we need to make are going to be messy. They're not going to be quick fixes. If you think about how long it took our society to create the kinds of inequities we now live with, we should expect to do our part, do the work, and pass the baton. We're not going to magically, like fairy dust, create a wonderful algorithm that's going to help us bypass these issues. It can expose them, but then it's up to us to actually do the hard work of changing our social relations, of changing the culture of not just our workplaces but our schools, our healthcare systems, our neighborhoods, so that they reflect our better values.

>>So beautifully said, Ruha. I think all of us are willing to do the hard work, and I like your point about using it as a mirror. At ThoughtSpot we like to say a fact-driven world is a better world; it can give us that transparency. So on behalf of everyone, thank you so much for your passion, for your hard work, and for talking to us.

>>Thank you, Cindy. Thank you so much for inviting me. Back to you.

>>Thank you, Cindy and Ruha, for this fascinating exploration of our society and technology. We're just about ready to move on to our final session of the day, so make sure to tune in for the customer case study session, with executives from Sienna and Accenture, on driving digital transformation with search and AI.
AIOps Virtual Forum 2020
>>From around the globe, it's theCUBE, with digital coverage of the AIOps Virtual Forum, brought to you by Broadcom.

>>Welcome to the AIOps Virtual Forum. I'm Lisa Martin, and I'm excited to be talking with Rich Lane, senior analyst serving infrastructure and operations professionals at Forrester. Rich, it's great to have you today.

>>Thank you for having me. I think it's going to be a really fun conversation to have today.

>>It is. We're going to set the stage with Rich for the IT operations challenges and the need for AIOps; that's our objective in the next 15 minutes. So Rich, talk to us about some of the problems that enterprise IT operations are facing now, in 2020, that are going to continue into next year.

>>Yeah, I think we've been on this path for a while, but certainly the last eight months have accelerated the problem and brought a lot of things to light. People were treating the day-to-day firefighting as their normal way of life, and that's just not sustainable anymore in a highly distributed environment with this need for digital services. One thing that has been building for a while is that in the digital age we provide so many of our interactions with customers online. We've added layers of complexity to applications and infrastructure; we're in the cloud, or hybrid, or multi-cloud, you name it, using cloud-native technologies, and we're still using legacy stuff; we still have mainframe out there. Just the vast number of things we now have to keep track of, process, and pull data and signals from makes it really untenable for humans to do that in silos.

Add to that the fact that companies are so heavily invested in the digital transformation path, and it's accelerated so much in the last year or so, that we're deriving so much of our business and revenue from these services that they've become core to the business. They're not afterthoughts anymore. It's not just about having a website presence; it's about deriving core business value from the services you're providing to your customers, and in a lot of cases those are customers you're never going to meet or see. So it's even more important to be vigilant about the quality of the service that you're giving them.

And then, when you think about the staffing issues we have, there just aren't enough bodies to go around in IT operations anymore. We're not going to be able to hire like we did even 10 years ago. So that's where we need the systems to bring operational efficiencies to bear. When we say operational efficiencies, we don't mean lessening headcount, because we can't do that; that would be foolish. What we mean is getting the headcount we have back to working on higher-level things, technology refreshes and project work that bring better digital services to customers, and out of doing the low-complexity, high-volume tasks they're spending at least 20% of each day on, if not more. So I think the more we can bring intelligence and automation to bear to take those things out of their hands, the better off we are going forward.
And I'm sure those workers want the time to deliver more value, more strategic value, to the organization and in their roles. As you're saying, the demand for digital services is spiking and it's not going to go down, and as consumers, if we have another option and we're not satisfied, we'll go somewhere else. So it's really about not just surviving right now; it's about how to become a business that thrives going forward and exceeds expectations that just keep growing. So let's talk about AIOps as a facilitator of collaboration across business folks, IT folks, developers, and operations. How can it facilitate collaboration, which is even more important these days?

>>Yeah, so one of the great things about it is this. Years ago, in the olden days as they say, we would buy a tool to fit each situation. Someone who worked in networking had their tool, somebody who worked in infrastructure from a Linux standpoint had their tool, somebody in storage had their tool. And what we found was that we would have a very high-impact incident occur, everybody would get on the phone, 24 people all looking at their siloed tools and their siloed pieces of data, and then we'd still have to try to link point A to B to C together through institutional knowledge. There ended up being a lot of gaps, because we couldn't understand that a certain thing happening over here was related to an event over there.

Now, when we bring all that data under one umbrella, one data lake, whatever we want to call it, apply a lot of smart analytics to it, and normalize it in a way that we can contextualize it from point A to point B all the way through the application and infrastructure stack, the conversation changes. Now the conversation is: here is the problem, how are we going to fix it? And we're getting there immediately, versus three, four, five hours of hunting and pecking, looking at things, and trying to extrapolate what we're seeing across disparate systems. That's really valuable. What that does is change what we measure, from server uptime and data center performance metrics to how we are performing as a business: in real time, how is the business being impacted by a service disruption? We know how much money we're losing per minute or per hour, and what that translates into in brand damage and things along those lines, and people are very interested in that. And what is the effect of the decisions we make on the product side? We're always changing the mobile app, we're always changing the website, but do we understand what value that brings us, or what negative impact it has? We can measure that now. The same goes for sales and marketing: they run a campaign, here's your coupon for 12% off, today only; what does that drive for us in user engagement? We can measure that in real time. We don't have to wait for those answers anymore. And I think having all that data, and understanding the cause and effect of things, enhances the feedback loops we use to make decisions as a business, as a whole, to bring better value to our customers.
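As a rough sketch of the "money lost per minute" measurement Rich describes, the snippet below compares observed per-minute revenue during an incident window against a normal baseline. The data shape, baseline figure, and window are hypothetical, not taken from any particular product.

```python
# Hypothetical real-time business-impact estimate for a service disruption.
from datetime import datetime, timedelta

def revenue_at_risk(revenue_by_minute, incident_start, incident_end, baseline_per_min):
    """revenue_by_minute: dict mapping a minute (datetime) to observed revenue in dollars."""
    lost = 0.0
    t = incident_start
    while t < incident_end:
        observed = revenue_by_minute.get(t, 0.0)
        lost += max(baseline_per_min - observed, 0.0)   # shortfall versus a normal minute
        t += timedelta(minutes=1)
    return lost

# Example with made-up numbers: a 30-minute checkout degradation.
# start = datetime(2020, 11, 4, 14, 0)
# impact = revenue_at_risk(observed, start, start + timedelta(minutes=30), baseline_per_min=8_500)
```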
And how does that tie into ops and dev initiatives? Everything we do, say a change I make to the underlying architecture, either helps move the needle forward or hinders things, and all of it factors into the customer experience, which is what we're trying to improve at the end of the day. Whether operations people like it or not, we are all in the customer experience business now, and we have to realize that and work closer than ever with our business and dev partners to make sure we're delivering the highest level of customer experience we can.

>>Customer experience is absolutely critical for a number of reasons; I always think it's inextricably linked with employee experience. But let's talk about long-term value, because organizations in every industry have pivoted multiple times this year and will probably continue to do so for the foreseeable future. Beyond the immediate value, beyond just stopping the bleeding, how do they gain a competitive advantage and really become resilient? What are some of the applications where AIOps can deliver long-term value for an organization?

>>Yeah, and you touched on a very important point: there's a set of short-term goals you want to achieve, but you should really be looking toward what it will have done for you 12 to 18 months down the road. I think that helps you frame out what's most important, because it will be different for every enterprise, and it also shows the ROI of doing this, because there is change involved, things you're going to have to do. When you look at the longer time horizon of what it brings to your business as a whole, to me at least it seems like a no-brainer. Think about the basic things, like faster remediation of client-impacting incidents, or maybe even predictive detection of the incidents that will affect clients. Now you're operating at scale; it's very hard to do this when you have hundreds of thousands of objects under management that relate to each other, but now you're letting the machines and the intelligence layer find where the problem is. It's not the red thing, it's the yellow thing: go look at that. It reduces the amount of finger-pointing and back-and-forth between teams; now everybody's looking at the same data, the same symptoms, and saying, okay, this is telling us the root cause, you should investigate this. That's a huge, huge thing. It's something we never thought we'd get to, where the system is smart enough to tell us these things, but again, this is the power of having all the data under one umbrella.

>>And the smart analytics.

>>And the smart analytics. And really, if you look at where infrastructure and operations people are today, especially eight or nine months into the pandemic, a lot of them are getting really burnt out doing the same repetitive tasks over and over again,
just trying to keep the lights on. We need to take those tasks off people's plates, because it makes no sense to run the same remediation step over and over again; we should automate those things. So it's about getting that drudgery off their hands, if you will, and getting them onto the important things they should be doing, the really hard-to-solve problems. That's where humans shine, and that's what really high-level engineers should be doing, working in a much faster, more efficient manner.

Think about an incident occurring: a level-one technician picks it up, goes and triages it, maybe runs some tests, maybe has a script, and then he or she opens a ticket, enriches the ticket, and pulls some log files to look at for the servers involved. You're an hour and a half into an incident before anyone has even really looked at it. If we could automate all of that, why wouldn't we? It makes it easier for everyone.

And I really think that's where the future is: bringing intelligent automation to bear to knock down all the little things that consume the most time. If you aggregate it over the course of a quarter or a year, a great deal of your time is spent on just that minutiae, so why don't we automate it? We should. I also think we're going to be able to measure everything in terms of business KPIs rather than just IT-centric KPIs; that's where we need to get to in the digital age, and I think we've waited too long to do it. Our operations models are outmoded, and a lot of the KPIs we look at today are completely outmoded; they don't really change when we look at the monthly reports over the course of a year. So let's do something different, and now, with all this data and the smart analytics, we can do something different.
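Picking up the level-one triage scenario Rich just walked through (run a few checks, pull logs, open and enrich a ticket), here is a minimal sketch of what automating that first hour and a half might look like. The hosts, commands, and `ticket_api` client are hypothetical placeholders, not any particular vendor's interface.

```python
# Hypothetical auto-triage runbook: on an alert, run basic health checks, collect
# recent logs, and attach the evidence to a ticket before a human ever looks at it.
import datetime
import json
import subprocess

def run_check(cmd):
    # Run one diagnostic command and capture its outcome.
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return {"cmd": " ".join(cmd), "rc": result.returncode, "out": result.stdout[-2000:]}

def auto_triage(alert, ticket_api):
    host = alert["host"]
    evidence = {
        "received": datetime.datetime.utcnow().isoformat(),
        "checks": [
            run_check(["ping", "-c", "3", host]),
            run_check(["ssh", host, "df", "-h"]),                            # disk space
            run_check(["ssh", host, "tail", "-n", "200", "/var/log/syslog"]),  # recent logs
        ],
    }
    # ticket_api stands in for whatever ITSM client is actually in use.
    ticket_id = ticket_api.create(summary=f"Alert on {host}: {alert['message']}")
    ticket_api.attach(ticket_id, "triage.json", json.dumps(evidence, indent=2))
    return ticket_id
```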
>>Absolutely. I'm glad that you brought up the impact that AIOps can make on minutiae and burnout. That's a huge problem so many of us are facing, in any industry, and we know some amount of this is going to continue for a while longer. So let's leverage intelligent automation, to your point, because we can, to allow our people to not just be more efficient but to make a bigger impact; there's a mental component there that I think is absolutely critical. I do want to ask you, for those folks saying, all right, we've got to do this, it makes sense, we see the short-term and long-term value you've just walked us through: what are some of the obstacles to be on the lookout for, so they can get them out of the way?

>>Yeah, I think when people think about the obstacles, they underestimate what big changes these are for their organization. They're going to change process, they're going to change the way teams interact, they're going to change a lot of things, but it's all for the better. Something we're traditionally really bad at in infrastructure and operations is communication, marketing a new initiative. We don't go out and get our peers' agreement the way a product owner would, and say, okay, here's what this gets you, here's what changes. People just hear: I'm losing something, I'm losing control over something, you're going to get rid of the tools I love and have spent years perfecting. That's threatening to people, and understandably so, because people think, if I start losing tools, I start losing headcount, and then where's my department? But that's not what this is about. This isn't a replacement for people; it isn't a replacement for teams. This is augmentation. It's getting them back to doing the things they should be doing and less of the stuff they shouldn't be doing. And frankly, it's about providing better services. In the end, it's counterintuitive to be against it, because it's going to make IT operations look better; it's going to show that we are the thought leaders in delivering digital services, that we can constantly be perfecting the way we do it, and, by the way, that we can help the business be better at the same time.

I think one of the mistakes people really do make is not looking at their processes today and figuring out what those will look like tomorrow when advanced automation and intelligence come in, and not being prepared for the future state. Talking to one company, they said, yeah, we're so excited, we got rid of our 15-year-old monitoring system and stood up the new system the same day. The one problem was that they weren't ready for the number of incidents it generated on day one. It wasn't because they did anything wrong, or because the system was wrong; it actually did the right thing, almost too well. It uncovered, through advanced correlation, a lot of really small incidents they didn't know they had. There were things lying out there where they'd always thought, huh, that's weird, that system acts strange sometimes, but we could never pin it down. They found all of those things, which is good, but it made everyone sit back and think, and then their leadership asked, are these guys even doing their jobs right? They had to walk through an explanation: we were 15 years behind from a visibility standpoint on our environment, while the technologies and applications we'd deployed had moved ahead and modernized. So it's a cautionary tale about falling too far behind from a monitoring, intelligence, and automation standpoint, and a good story to keep in mind as you deploy these modern systems. If you do the marketing to people so they're not threatened, think through your processes, and then think about what day one looks like, and what six and twelve months after that look like, settling all of that up front just sets you up for success.

>>All right, Rich, take us home here. Let's summarize: how can clients build a business case for AIOps? What do you recommend?

>>Yeah, I actually get that question a lot.
It's almost always the number one question in webinars like this and in the conversations the audience sends in, so I wouldn't be surprised if that holds true going forward from this one. People say, hey, we're all in, we want to do this, we know this is the way forward, but the person who writes the checks, the CIO or the VP of ops, says, I've signed lots of checks over the years for tools; why is this different? What I guide people to do is sit back and start doing some hard math. The thing that resonates with leadership is dollars and cents, not percentages. Saying it brings us a 63% reduction in MTTR is not going to resonate, even though that's a really good number. You have to put it in terms of what you avoid if you capture that 63%. What does that mean for our digital services in terms of revenue? We know that every hour a system is down typically costs an enterprise about $500,000; add that up over the course of the year and ask what you're losing in revenue. Add to that brand damage and loss of customers. Forrester puts out a really big customer experience index every year that measures whether you're delivering good or bad digital services; if you could raise that score, what does that return to you in revenue? That's a key piece.

Then look at what I call the hours of lost productivity; I might end up calling it something else, but I think it's a catchy name. Meaning, if a core internal system is down and you have a customer service desk of a thousand people who can't do that lookup or fix that problem for clients for an hour, how much money does that lose you? You multiply it out: the average customer service desk person makes X amount an hour, times this much downtime, times this many occurrences. Then you start seeing the real power of AIOps for incident avoidance, or at least for lowering the impact of those incidents. People have put this into graphs and spreadsheets, and I'm actually doing some research around this to put out something people can use to show that the project funds itself in six to twelve months, it's paid for itself, and after that it's returning money to the business. Why would you not do that? When you start framing the conversation that way, the light bulb turns on for the people who sign the checks.

>>That's great advice for folks to be thinking about. I loved how you talked about the 63% reduction in something; that's great, but what does it impact? How does it impact revenue for the organization? If we're avoiding costs here, how do we drive up revenue? Having that laser focus on revenue is great advice for folks in any industry looking to build a business case for AIOps. I think you set the stage for that beautifully, Rich, and you were right, this was a fun conversation. Thank you for your time.

>>Thank you.

>>And thanks for watching.
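To make Rich's back-of-the-envelope math reproducible, here is a minimal sketch that turns the inputs he mentions (cost per outage hour, an expected MTTR reduction, service-desk headcount and hourly cost) into an annual avoided-cost figure and a payback period. Every number below is an illustrative assumption, not a benchmark.

```python
# Illustrative AIOps business-case calculator; all inputs are made-up assumptions.
def annual_avoided_cost(outage_hours_per_year, cost_per_outage_hour,
                        mttr_reduction,                 # e.g. 0.63 for a 63% reduction
                        internal_incidents_per_year, affected_staff,
                        staff_hourly_cost, avg_internal_outage_hours):
    external = outage_hours_per_year * cost_per_outage_hour * mttr_reduction
    productivity = (internal_incidents_per_year * affected_staff *
                    staff_hourly_cost * avg_internal_outage_hours * mttr_reduction)
    return external + productivity

savings = annual_avoided_cost(
    outage_hours_per_year=40, cost_per_outage_hour=500_000, mttr_reduction=0.63,
    internal_incidents_per_year=24, affected_staff=1_000,
    staff_hourly_cost=30, avg_internal_outage_hours=1.0)
project_cost = 2_000_000          # hypothetical first-year spend
payback_months = 12 * project_cost / savings
print(f"Estimated annual avoided cost: ${savings:,.0f}; payback in ~{payback_months:.1f} months")
```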
>>From around the globe, it's theCUBE, with digital coverage of the AIOps Virtual Forum, brought to you by Broadcom.

>>Welcome back to the Broadcom AIOps Virtual Forum. Lisa Martin here, talking with Usman Nasir, global product management at Verizon. Usman, welcome back.

>>Hi, hello. What a pleasure.

>>So, 2020: the year that needs no explanation, right? The year of massive challenges. I want to get your take on the challenges organizations are facing this year, as the demand to deliver digital products and services has never been higher.

>>Yeah. I think this is something that's close to all of our hearts. It's something that's impacted the whole world equally, and regardless of which industry you're in, you have been impacted in one form or the other. The ICT industry, the information and communication technology industry, with Verizon being a really massive player in that whole arena, has been struck with this massive acceleration. We have talked for a long time about remote surgery capabilities, where you've got patients in Kenya being treated by an expert sitting in London or New York, and about this whole consciousness around our carbon footprint and being environmentally conscious. This pandemic has brought all of that to the forefront of organizations' priorities.

The demand is a very natural consequence of everybody sitting at home, and the only thing that keeps things going is data communication. But I wouldn't say that's all that's at the heart of this. Just imagine: if we are to realize any of the targets that world leaders are setting for themselves, that we have to be carbon neutral by such-and-such a year as a country, as a geography, and so on, all of those things require these remote working capabilities, this remote interaction, not just between humans but machine-to-machine interaction as well. There's a unique value chain being created, where people are communicating with other people or with machines, but the communication is much more than what we've traditionally called real time for voice and video. We're talking low-latency, microsecond decision-making that could be the difference between nicking somebody's arteries and cleanly removing the tumor. That has become a reality, and everybody's asking for it. Remote learning has become a massive requirement, where we've had to enable virtual classrooms while ensuring the right kind of connectivity and the right kind of privacy, which is just so critical. You can't just have everybody go out on the internet and access a data source; you have to be concerned about the integrity and security of that data first and foremost. So with all of these things, no, we have not been caught off guard; we were pretty forward-looking in our plans and our evolution. But it has fast-tracked the journey: what we probably believed would take three years has come down to two quarters in which we've had to execute.

>>Right, massive acceleration. You've articulated the challenges really well, and a lot of the realities that many of our viewers are facing. Let's talk now about motivations: AIOps as a tool, as a catalyst, for helping organizations overcome those challenges.

>>Yeah. Building on what I just said, you can imagine this requires microsecond decision-making, and which human being on this planet can do microsecond decision-making on complex network infrastructure that is impacting end-user applications, with all the downstream effects that has?
In real life, I use the example of the remote surgeon. Just imagine: a glitch in the quality of that communication for even a microsecond could be the difference between killing somebody and saving somebody's life, and it's not always predictable. We talk about autonomous vehicles, the transition to electric vehicles, smart motorways, and so on; in a federated environment, how is all of that going to work? You have so many different components coming in. You don't just have a network and security anymore; you have software-defined networking becoming part of it, you have mobile edge computing, one of the technologies 5G enables, and we're talking augmented reality and virtual reality. All of these things require resources, and if we're being carbon conscious, we can't just build a billion data centers on this planet. We have to make sure resources are given on demand, and the best way for resources to be given on demand, and used most efficiently, is for the decision to be made in that microsecond and the resources distributed accordingly. If you're relying on people sipping their coffees, having their teas, talking to somebody else, or being away on holiday, I don't think we're going to be able to handle the world we have already stepped into. Verizon's 5G has already started businesses on that transformational journey, where they're talking about end-user experience personalization. You're going to have events where people have three-dimensional experiences that are purely customized for them. How does all of that happen without intelligence sitting in a network with all of these multiple layers? The network doesn't just need to be intuitive about, hey, this is my private IP traffic, this is public traffic, or this is an application I have to prioritize over another; it has to be intuitive to the criticality and the context of those transactions. Again, that surgeon's procedure is much more important than somebody sitting and playing a video game.

>>I'm glad you put it that way; that's excellent. Let's go into some specific use cases and dig deeper into what you think is the lowest-hanging fruit for organizations, pan-industry, to go after.

>>Excellent. I think there are different ways to look at the lowest-hanging fruit. For somebody like Verizon, a managed services provider with very comprehensive managed services, the fruit obviously hangs much lower than it might for some of our customers who want to go on that journey themselves; for them, going out and trying to harness the power of these tools might be a bit higher-hanging. But for somebody like us, the immediate one would be to reduce the number of alarms being generated across these overlay services. You've got your basic network, then you've got your whole software-defined networking on top of that, you have your hybrid clouds, you have your edge computing coming in on top of that. All of that means that if there's an outage on one device on the network, and I want to make this very real for everybody:
that one outage does not stop all of those applications and monitoring tools from raising thousands and thousands of alarms, each in its own capacity. If people are attending to those thousands of alarms, it's like having a police force where there's a burglary at one bank and the alarm goes off at 50 banks: how do you make the best use of your police force? Are you going to investigate all 50 banks, or do you want to investigate where the problem actually is? It's as real as that. I think that's the first win, where people can save so much cost that is currently wasted on resources running around trying to figure things out.

Immediately tied to that is network and security. Even in the most advanced organizations, we used to have network experts and security experts as separate people looking for different things, but there are security events that can impact the performance of a network, and performance problems that get falsely attributed to the network. And when you've got multiple parties and no clear accountability, you can imagine the blame game that goes on: pointing fingers, naming names, nobody taking responsibility for how this happened. The only way forward is to bring it all together and establish what takes priority: if an event has happened, what is its correlation to the downstream systems, devices, components, and end-user applications, and then subsequently isolate it to the right cause, where you can most effectively resolve the problem.

Thirdly, I would say on-demand virtualized resources. Virtualized resources are the heart and soul, the spirit, of this whole shift: you can have them on demand. So you can automate the allocation of those resources based on customers' consumption, their peaks, their troughs, all of that comes in. You see that, hey, typically on a Wednesday the traffic goes up significantly for this particular application going to this particular data center, and you can have an automated system providing those resources on demand. That gives you a much better commercial engagement with customers and a much better service assurance model. And one more thing on top of that, which is very critical: as I was saying, giving the network the intelligence to understand the criticality and context of a transaction. You can't have that unless you have all of that data, and unless the multiple systems monitoring and controlling different aspects of your overall end-user application value chain are communicating with each other. That's the only way to achieve that goal, and that only happens with AI. It's not possible otherwise.
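A minimal sketch of the alarm-flood reduction just described: group downstream symptom alarms, within a short time window, under the upstream device they depend on, so one failing device surfaces as one actionable problem instead of thousands of alerts. The topology map and alarm format are hypothetical.

```python
# Hypothetical topology-based alarm correlation.
from collections import defaultdict

def correlate(alarms, depends_on, window_seconds=120):
    """alarms: list of {"device": str, "ts": float}; depends_on: child device -> parent device."""
    def root_of(device):
        seen = set()
        while device in depends_on and device not in seen:   # walk up the dependency chain
            seen.add(device)
            device = depends_on[device]
        return device

    groups = defaultdict(list)
    for alarm in sorted(alarms, key=lambda a: a["ts"]):
        key = (root_of(alarm["device"]), int(alarm["ts"] // window_seconds))
        groups[key].append(alarm)
    # One entry per suspected root cause per time bucket, with its symptom count.
    return {key: len(symptoms) for key, symptoms in groups.items()}

# Example: three downstream alarms and a VM alarm all collapse onto the switch "sw-01".
# topo = {"app-a": "vm-7", "app-b": "vm-7", "vm-7": "sw-01"}
# print(correlate([{"device": "app-a", "ts": 10}, {"device": "app-b", "ts": 12},
#                  {"device": "vm-7", "ts": 14}, {"device": "sw-01", "ts": 9}], topo))
```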
>>You've clearly articulated some obvious low-hanging fruit and use cases that organizations can go after. Let's talk now about some of the considerations. You talked about the importance of the network, and the approach, I assume, needs to be modular and the support heterogeneous. Talk to us about some of the key considerations you would recommend.

>>Absolutely. So again, it basically starts with the network, because if the network sitting at the middle of all of this is not working, then things can't communicate with each other, the cloud doesn't work, nothing works. That's the foundational part of it. Then, when you talk about machine-to-machine communication, or IoT, which is just the biggest transformation, every company is going after IoT now to drive cost efficiencies and enhance the experience, you have to think about the integrity of the data and the commands. How do you maintain the integrity of your data beyond just having secure network components? That's where you get into the whole arena of blockchain technologies, where you use digital signatures or hash codes so that the machine, and the intelligent system behind it, is automatically able to validate and verify the integrity of the data and the commands being executed by those end devices. For IoT machines, that is paramount, and if anybody is not keeping that in their equation, they are missing the system that maintains the integrity of the commands and the data sitting on those machines.

Second, you have your network, and you need an AIOps platform that is able to ingest all of that fast-moving network information and couple it with that data integrity piece, because management ultimately needs a coherent view of the analytics; they need to know where the problems are. And if there's a problem with the integrity of the commands being executed by a machine, that's a much bigger problem than not being able to communicate with the machine at all, because you'd rather not talk to the machine than have it start doing the wrong things.

Subsequently, and let me use the autonomous vehicle use case again, because I think we're going to see it in the next five years, with smart motorways and so on, it won't just be separate autonomous cars; it's much more efficient, a much better use of space, and so on. Within that equation, you're going to have systems that are specialists in looking at particular aspects and transactions. For example, in an autonomous moving vehicle, the brakes are much more important than the wipers. So that kind of intelligence will sit across multiple systems; nobody should expect one person, or one monolithic system, to do it all. These systems should be open enough that you're able to integrate them; if something's sitting in the cloud, you're able to integrate with it, obviously with due regard for the security and integrity of the data that has to traverse from one system to the other, which is extremely important.
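On the point about validating the integrity of data and commands before a machine acts on them, here is a minimal verify-before-execute sketch using an HMAC from Python's standard library. A production deployment would more likely use asymmetric signatures and a proper key-management system; this only illustrates the pattern.

```python
# Minimal verify-before-execute pattern for machine commands (shared-secret HMAC
# for brevity; real systems would typically use asymmetric keys and managed secrets).
import hashlib
import hmac
import json

SECRET = b"replace-with-a-managed-key"   # placeholder only

def sign(command: dict) -> str:
    payload = json.dumps(command, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_and_execute(command: dict, signature: str, executor):
    payload = json.dumps(command, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("Command rejected: signature check failed")
    return executor(command)   # only runs if the integrity check passes

# cmd = {"device": "valve-12", "action": "close"}
# verify_and_execute(cmd, sign(cmd), executor=lambda c: print("executing", c))
```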
What would you say in summary fashion? >>So I think the overall impact is a lot of costs. That's customized and businesses gives the time to the time of enterprises. Defense was inevitable. It's something that for the first time, it will come to life. And it's something that is going to, you know, start driving cost efficiencies and consciousness and awareness within their own business, which is obviously going to have, you know, it domino kind of an effect. So one example being that, you know, you have problem isolation. I talked about network security, this multi-layers architecture, which enables this new world of 5g, um, at the heart of all of it, it has to identify the problem to the source, right? Not be bogged down by 15 different things that are going wrong. What is causing those 15 things to go wrong, right? That speed to isolation in its own sense can make millions and millions of dollars to organizations after we organize it. Next one is obviously overall impacted customer experience. Uh, 5g was given out of your customers, expecting experiences from you, even if you're not expecting to deliver them in 2021, 2022, it would have customers asking for those experience or walking away, if you do not provide those experience. So it's almost like a business can do nothing every year. They don't have to reinvest if they just want to die on the line, businesses want remain relevant. >>Businesses want to adopt the latest and greatest in technology, which enables them to, you know, have that superiority and continue it. So from that perspective that continue it, he will read that they write intelligence systems that tank rationalizing information and making decisions supervised by people, of course were previously making some of those. >>That was a great summary because you're right, you know, with how demanding consumers are. We don't get what we want quickly. We churn, right? We go somewhere else and we could find somebody that can meet those expectations. So it has been thanks for doing a great job of clarifying the impact and the value that AI ops can bring to organizations that sounds really now is we're in this even higher demand for digital products and services, which is not going away. It's probably going to only increase it's table stakes for any organization. Thank you so much for joining me today and giving us your thoughts. >>Pleasure. Thank you. We'll be right back with our next segment. >>Digital applications and services are more critical to a positive customer and employee experience than ever before. But the underlying infrastructure that supports these apps and services has become increasingly complex and expanding use of multiple clouds, mobile and microservices, along with modern and legacy infrastructure can make it difficult to pinpoint the root cause when problems occur, it can be even more difficult to determine the business impact your problems that occur and resolve them efficiently. AI ops from Broadcom can help first by providing 360 degree visibility, whether you have hybrid cloud or a cloud native AI ops from Broadcom provides a clear line of sight, including apt to infrastructure and network visibility across hybrid environments. Second, the solution gives you actionable insights by correlating an aggregating data and applying AI and machine learning to identify root causes and even predict problems before users are impacted. 
Third AI ops from Broadcom provides intelligent automation that identifies potential solutions when problems occur applied to the best one and learns from the effectiveness to improve response in case the problem occurs. Again, finally, the solution enables organizations to achieve digit with jelly by providing feedback loops across development and operations to allow for continuous improvements and innovation through these four capabilities. AI ops from Broadcom can help you reduce service outages, boost, operational efficiency, and effectiveness and improve customer and employee experience. To learn more about AI ops from Broadcom, go to broadcom.com/ai ops from around the globe. >>It's the cube with digital coverage of AI ops virtual forum brought to you by Broadcom. >>Welcome back to the AI ops virtual forum, Lisa Martin here with Srinivasan, Roger Rajagopal, the head of product and strategy at Broadcom. Raj, welcome here, Lisa. I'm excited for our conversation. So I wanted to dive right into a term that we hear all the time, operational excellence, right? We hear it everywhere in marketing, et cetera, but why is it so important to organizations as they head into 2021? And tell us how AI ops as a platform can help. >>Yeah. Well, thank you. First off. I wanna, uh, I want to welcome our viewers back and, uh, I'm very excited to, uh, to share, um, uh, more info on this topic. You know, uh, here's what we believe as we work with large organizations, we see all our organizations are poised to get out of the, uh, the pandemic and look for a brood for their own business and helping customers get through this tough time. So fiscal year 2021, we believe is going to be a combination of, uh, you know, resiliency and agility at the, at the same time. So operational excellence is critical because the business has become more digital, right? There are going to be three things that are going to be more sticky. Uh, you know, remote work is going to be more sticky, um, cost savings and efficiency is going to be an imperative for organizations and the continued acceleration of digital transformation of enterprises at scale is going to be in reality. So when you put all these three things together as a, as a team that is, uh, you know, that's working behind the scenes to help the businesses succeed, operational excellence is going to be, make or break for organizations, >>Right with that said, if we kind of strip it down to the key capabilities, what are those key capabilities that companies need to be looking for in an AI ops solution? >>Yeah, you know, so first and foremost, AI ops means many things to many, many folks. So let's take a moment to simply define it. The way we define AI ops is it's a system of intelligence, human augmented system that brings together full visibility across app infra and network elements that brings together disparate data sources and provides actionable intelligence and uniquely offers intelligent automation. Now, the, the analogy many folks draw is the self-driving car. I mean, we are in the world of Teslas, uh, but you know, uh, but self-driving data center is it's too far away, right? Autonomous systems are still far away. However, uh, you know, application of AI ML techniques to help deal with volume velocity, veracity of information, uh, is, is critical. So that's how we look at AI ops and some of the key capabilities that we, uh, that we, uh, that we work with our customers to help them on our own for eight years. >>Right? First one is eyes and ears. 
what we call full-stack observability. If you do not know what is happening in the systems that serve up your business services, it's going to be pretty hard to do anything in terms of responsiveness. Beyond full-stack observability, the second piece is what we call actionable insights. When you have disparate data sources, tool sprawl, and data coming at you from database systems, IT systems, customer management systems, and ticketing systems, how do you find the needle in the haystack? How do you respond rapidly to a myriad of problems as they occur? The third area is what we call intelligent automation. Identifying the problem to act on is important, but automating the action and creating a recommendation system, so that you can be proactive about it, is even more important. And finally, all of this focuses on efficiency; what about effectiveness? Effectiveness comes when you create a feedback loop, when what happens in production is relayed to your support systems and your developers so that they can respond rapidly. We call that continuous feedback. So those are the four key capabilities you should look for in an AIOps system, and that's what we offer as well.

>> Raj, those are the four key capabilities businesses need to be looking for. I'm wondering how those help to align business and IT. Like operational excellence, the alignment of business and IT is something we talk about a lot, but it's a lot more challenging than it sounds, easier said than done. I want you to explain how AIOps can help with that alignment and align IT outputs to business outcomes.

>> I'm going to say something that is simple but harder than it sounds: alignment is not about systems, alignment is about people. When people align, when organizations align, when cultures align, dramatic things can happen. In the context of AIOps, when SREs are aligned with DevOps engineers, information architects, and IT operators, they enable organizations to reduce the gap between intent and outcome, or output and outcome. That said, these personas need mechanisms to help them better align, to help them better visualize what we call a single source of truth. There are four key things I want to call out. When we work with large enterprises, we find that aligning the customer journey with the IT systems behind it is critical. How do you map your business imperatives and your customer journey goals, whether it's cart-to-purchase or a bill-shock scenario, to your IT systems? That alignment is one area where you can reduce the gap. The second area is creating a scenario where your teams can find problems before your customers do, outage scenarios and so on. The third area of alignment is measuring business-impact-driven services. An organization offers many services on top of its IT systems; some services are more critical to the business than others, and that changes in a dynamic environment. So how do you understand that, how do you measure it, and how do you find the gaps?
That's the third area of alignment where we help. And last but not least, there are things like NPS scores and others that help us understand alignment, but those are more long-term. In the context of operating digitally, you want to use customer experience and a single business outcome as a key alignment factor, and then work with your systems of engagement and systems of interaction, along with your key personas, to create that alignment. It's a people, process, and technology challenge.

>> One of the things you said there is that it's imperative for the business to find a problem before a customer does, and you talked about outages. That's always a goal for businesses, to prevent those outages. How can AIOps help with that?

>> Outages go to the resiliency of a system, and they also go to the agility of that same system. If you're a customer whipping out your mobile app and it takes more than a few seconds, you're probably losing that customer. So outages mean different things, and there's an interesting website called downdetector.com that tracks the outages of publicly available services, whether it's your bank, your telecom service, your mobile service, and so on. In fact, the key question around outages from executives is: are you ready? Are you ready to respond to the needs of your customers and your business? Are you ready to rapidly resolve an issue that is impacting customer experience and therefore satisfaction? Are you creating a digital trust system where customers can feel that their information is secure when they transact with you? All of these get into the notion of resiliency and outages. Now, one of the things I often work with customers on is what I would define as the radius of impact, which matters when you deal with outages. Problems occur; how do you respond? Does it take you two seconds, two minutes, 20 minutes, two hours, or 20 hours to resolve the problem? That radius of impact is important, and that's where you have to bring people, process, and technology together to solve it. The key is that you need a system of intelligence that can aid your teams and let them look at the same set of parameters so that they can respond faster.

>> When we look at digital transformation at scale, Raj, how does AIOps help influence that?

>> I'm going to take a slightly long-winded way to answer this question. When it comes to digital transformation at scale, the focus on business purpose and business outcome becomes extremely critical, and the alignment of that to your digital supply chain is the key factor that differentiates the winners in the digital transformation game. What we have seen is that winners operate very differently. For example, Nike measures its digital business outcomes in shoes per second, Apple in iPhones per minute, Tesla in Model 3s per month; you get the idea.
You want to have a clear business outcome that is, in effect, a measure of your business. An e-commerce site that my daughter and I use very well measures by revenue per hour. These are key measures, and when you have a key business outcome measure like that, you can align everything else to it, because you know what to measure. For a bank, it may be deposits per month: when you move money from a checking account to a savings account, or when you make direct deposits, the bank needs that liquidity, and so on. But the key thing is that a single business outcome has a Starburst effect inside the IT organization. A single money movement from a checking account to a savings account can touch about 75 disparate systems internally. Think about it: all we're doing is moving money from checking to savings. That goes into an IT production system where there are several applications. There's a database; there's infrastructure; there are load balancers and web server components, which touch your middleware component, which is a queuing system, which then touches your transactional system, which may be on your mainframes, what we call the mobile-to-mainframe scenario. And we're not done yet: then there's a security and regulatory compliance system you have to touch, a fraud prevention system you have to touch, state regulations you may have to meet, and on and on. This is the challenge that IT operations teams face, and when you have millions of customers transacting, suddenly this challenge cannot be managed by human beings alone. You therefore need a system of intelligence that augments human intelligence and acts as your eyes and ears, in a way, to pinpoint where problems are.

So digital transformation at scale really requires a very well-thought-out AIOps platform, an open, extensible platform that is heterogeneous in nature, because organizations have many tools and products, a lot of databases and systems, millions of customers, and hundreds of partners and vendors making up that digital supply chain. AIOps is at the center of enabling an organization to achieve digital transformation at scale. Last but not least, you need a continuous feedback loop: the ability for a production system to inform your DevOps teams, your finance teams, your customer experience teams, and your cost modeling teams about what is going on, so that they can reduce the intent-to-outcome gap. All of this needs to come together in what we call BizOps.

>> That was a great example, the way you talked about the Starburst effect. I'd actually never thought about it that way until you gave the banking example, but what you showed is the magnitude of systems involved, the fact that people alone really need help with that, and why intelligent automation and AIOps can be transformative and enable that scale. Raj, it's always a pleasure to talk with you. Thanks for joining me today. And we'll be right back with our next segment.
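To make the Starburst effect described above a little more concrete, here is a minimal sketch of the kind of cross-tier correlation an AIOps platform has to automate: following one money-movement request through a handful of tiers by a shared correlation ID and flagging the hop that failed or blew its latency budget. The event format, tier names, and thresholds are illustrative assumptions for this sketch, not Broadcom's actual data model.

```python
from collections import defaultdict

# Hypothetical log events for one "checking -> savings" transfer, all tagged
# with the same correlation ID. Field names and values are assumptions.
EVENTS = [
    {"corr_id": "txn-42", "tier": "web",         "latency_ms": 40,   "status": "ok"},
    {"corr_id": "txn-42", "tier": "middleware",  "latency_ms": 95,   "status": "ok"},
    {"corr_id": "txn-42", "tier": "mainframe",   "latency_ms": 1800, "status": "timeout"},
    {"corr_id": "txn-42", "tier": "fraud-check", "latency_ms": 60,   "status": "ok"},
]

# Per-tier latency budgets in milliseconds (invented for the example).
LATENCY_BUDGET_MS = {"web": 100, "middleware": 150, "mainframe": 500, "fraud-check": 200}

def trace(events, corr_id):
    """Group every hop that belongs to one business transaction."""
    by_id = defaultdict(list)
    for event in events:
        by_id[event["corr_id"]].append(event)
    return by_id[corr_id]

def suspect_hops(hops):
    """Flag tiers that failed outright or exceeded their latency budget."""
    return [h for h in hops
            if h["status"] != "ok" or h["latency_ms"] > LATENCY_BUDGET_MS[h["tier"]]]

if __name__ == "__main__":
    for hop in suspect_hops(trace(EVENTS, "txn-42")):
        print(f"investigate {hop['tier']}: status={hop['status']}, latency={hop['latency_ms']}ms")
```

The few lines matter less than the prerequisites they assume: a shared correlation ID and per-tier budgets have to exist before any tool, human or automated, can answer which of the roughly 75 systems broke the transfer.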
Welcome back to the AIOps Virtual Forum. We've heard from our guests about the value of AIOps and why and how organizations are adopting AIOps platforms. Now let's see AIOps in action and get a practical view. The head of AIOps at Broadcom is going to take you through a quick demo.

>> Hello, I'm the head of AIOps and automation here at Broadcom. What I'm going to do today is talk through some of the key capabilities and differentiators of Broadcom's AIOps solution. In this solution, which can be delivered in the cloud or on-prem, we bring a variety of metric, alarm, log, and related telemetry data from multiple sources, including APM, NetOps, and infrastructure monitoring tools, to provide a single point of observability and control.

Let me start where our users mostly start. Key enterprises such as FSIs, telcos, and retailers do not manage infrastructure or applications without a business context. At the end of the day, they offer business services governed by SLOs, service level objectives, and SLIs, service level indicators. Our service analytics, which can scale to a few thousand services, lets customers create and monitor services as they prefer. They can create a hierarchy of services based on their business practice; for example, here the sub-services are created based on functional subsystems, while for other enterprises it could be based on location. Users can import these services from their favorite CMDB. It's important to note that not all services are born equal: if you are a modern bank, you may want to prioritize tickets coming from digital banking, for example, and this application lets you rank services by the KPI of your choice. We can source availability not merely from the state of the infrastructure, whether components are running or not, but from the SLIs that represent the state of the application.

When it comes to triaging issues related to a service, it is important to have a complete view of the topology. The topology can show both east-west elements, from mobile to mainframe, and north-south elements in a network flow. This is particularly relevant for a large enterprise that could be running its systems of engagement in the cloud and its systems of record on the mainframe inside the firewall. Here, you can see that the issue is related to the mainframe CICS server. You can expand to see the actual alarm, which is sourced from mainframe operational intelligence. Similarly, clicking on the network gives the hub-and-spoke view of the network devices, the Cisco switches and routers; I can click on the affected router and see all the details. Broadcom's solution stores the topology model in a graph store, where one can view not only the current state of the topology but the past as well. Speaking of underlying data sources, the solution uses best-of-breed data stores for structured and unstructured data. We have not only leveraged the power of open source but actively contributed back to the community. One of the key innovations is evident in our dashboarding framework, where we have enhanced the open-source Grafana technology to support these diverse data sources; here you can see a single dashboard spanning applications, infrastructure, and mainframe, again sourcing a variety of data.

When we talk to customers, one of the biggest challenges they face today is related to alarms. Because of a proliferation of tools, they are drowning in an ocean of hundreds of thousands of alarms. This drives support costs of tens of dollars per ticket and hurts IT efficiency, leading to an average of five to six hours of mean time to resolution. Here is where we have state-of-the-art innovation, using the power of machine learning and ontology to arrive at the root cause. We not only cluster alarms based on text; we first look at the topology, then at the time window, then de-duplicate text based on NLP, and lastly learn from continuous training of the model to deduce what we call situations. This is an example of a situation: as you can see, we provide time-based evidence of how things unfolded and arrive at a root cause.
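As a rough illustration of the clustering step just described, the sketch below groups raw alarms into situations using only two of the signals mentioned, a shared time window and text similarity; the actual product also folds in topology and a continuously trained model. The alarm format and thresholds are assumptions made for this example.

```python
from difflib import SequenceMatcher

# Toy alarms: (timestamp in seconds, message). Real alarms would also carry
# topology references (host, service) used for proximity scoring.
ALARMS = [
    (100, "CPU utilization high on db-node-1"),
    (105, "CPU utilization high on db-node-2"),
    (110, "Transaction latency SLO breach on payments-service"),
    (900, "Disk nearly full on log-archiver"),
]

TIME_WINDOW_S = 120      # alarms further apart than this never join a situation
TEXT_SIMILARITY = 0.55   # crude stand-in for NLP-based de-duplication

def similar(a, b):
    """Cheap text similarity between two alarm messages."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= TEXT_SIMILARITY

def build_situations(alarms):
    """Greedily attach each alarm to an existing situation if it is close in
    time and textually similar to any member; otherwise start a new one."""
    situations = []
    for ts, msg in sorted(alarms):
        for group in situations:
            in_window = ts - group[-1][0] <= TIME_WINDOW_S
            if in_window and any(similar(msg, m) for _, m in group):
                group.append((ts, msg))
                break
        else:
            situations.append([(ts, msg)])
    return situations

if __name__ == "__main__":
    for i, group in enumerate(build_situations(ALARMS), start=1):
        print(f"situation {i}: {[m for _, m in group]}")
```

Even this greedy version shows the payoff the demo is after: a handful of situations to triage instead of hundreds of thousands of raw alarms.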
Lastly, the solution provides 360-degree closed-loop remediation, either through a ticketing system or by direct invocation of automation actions. Instead of firing hard-coded automation runbooks for certain conditions, the tool leverages machine learning to rank automation actions based on past heuristics; that's why we call it intelligent automation. To summarize, AIOps from Broadcom helps you achieve operational excellence through full-stack observability coupled with AI and ML that applies across modern hybrid cloud environments as well as legacy ones, and it uniquely ties these insights to intelligent automation to improve customer experience. Thank you for watching.
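A hedged sketch of what ranking automation actions on past heuristics could look like in miniature: score each candidate action by its smoothed historical success rate against similar situations, instead of always firing the same runbook. The situation label, action names, and history below are invented for illustration, not taken from the product.

```python
from collections import defaultdict

# Invented execution history: (situation_kind, action, resolved?). A real
# system would learn these outcomes automatically from closed incidents.
HISTORY = [
    ("db-cpu-saturation", "restart-service",   False),
    ("db-cpu-saturation", "scale-out-replica", True),
    ("db-cpu-saturation", "scale-out-replica", True),
    ("db-cpu-saturation", "clear-query-cache", True),
    ("db-cpu-saturation", "clear-query-cache", False),
]

def rank_actions(history, situation_kind, prior=1.0):
    """Rank actions for a situation by smoothed past success rate.
    A Laplace-style prior keeps rarely tried actions from scoring 0 or 1."""
    wins, tries = defaultdict(float), defaultdict(float)
    for kind, action, resolved in history:
        if kind == situation_kind:
            tries[action] += 1
            wins[action] += 1 if resolved else 0
    scores = {a: (wins[a] + prior) / (tries[a] + 2 * prior) for a in tries}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for action, score in rank_actions(HISTORY, "db-cpu-saturation"):
        print(f"{action}: estimated success {score:.2f}")
```

The smoothing prior is the design choice worth noting: it keeps an action that has only been tried once from looking either perfect or hopeless, which is what lets the ranking keep improving as new incident outcomes arrive.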
From around the globe, it's theCUBE, with digital coverage of AIOps Virtual Forum, brought to you by Broadcom.

>> Welcome to our final segment today. We've discussed the value that AIOps will bring to organizations in 2021 through three different perspectives, and now we want to bring those perspectives together and see if we can get a consensus on where AIOps needs to go for folks to be successful with it in the future. So we're bringing some folks back: Rich Lane, senior analyst serving infrastructure and operations professionals at Forrester; Usman Nasir, from global product management at Verizon; and Srinivasan Rajagopal, head of product and strategy at Broadcom. Guys, great to have you back. Let's jump in. Rich, we're going to start with you, but we're going to give all three of you a chance to answer the questions. We've talked about why organizations should adopt AIOps, but what happens if they choose not to? What challenges would they face? Basically, what's the cost of doing nothing?

>> Good question, because in operations, for a number of years, we've kind of stood pat where we are. We're afraid to change things sometimes, or we just don't think about tooling that often; it's the last thing to change, because we're spending so much time doing project work and modernization and fighting fires on a daily basis. The problem is going to get worse if we do nothing. We're building new architectures like containers and microservices, which means more things to mind and keep running. We're building highly distributed systems. We're moving more and more into this hybrid, multi-cloud world, and it has become overcomplicated. I'll give a short anecdote that I think illuminates this: when I go to conferences and give speeches, it's all infrastructure and operations people, and I ask how many of them have three or five times the things to monitor that they had two or three years ago; then I ask how many have hired more staff in that time period, and zero hands go up. That's the gap we have to fill, and we have to fill it through better automation and more intelligent systems. It's the only way we're going to dig back out.

>> Usman, what's your perspective if organizations choose not to adopt AIOps?

>> I would relate it to a couple of things that probably everybody is tired of hearing about lately, but that everybody can relate to. We have 5G, which is all set to transform the world as we know it, with all the communication around smart cities, smart communities, and IoT, which is going to be pivotal to the success of businesses. And as you've seen with this pandemic and the transformation of the world, there's a much bigger cost consciousness out there; people are trying to become much more forward-looking and much more sustainable. At the heart of all of this is the necessity for intelligent systems that can rationalize far more information than could previously be handled, information that often got overlooked: engagement not going right, people not being on the same page, to use two examples out of the hundreds of things that play a part but don't come together in the best possible way. So there's an absolute necessity to drive those cost efficiencies, rather than laying off, left, right, and center, people who are fundamental to your business and have great tribal knowledge of it. You can drive those efficiencies by automating a lot of the tasks that were previously very manually or resource intensive, and you can reallocate those resources toward doing much better things, which, let's be very honest, going into 2021 after what we've seen in 2020, is going to be mandatory.

>> Raj, I saw you shaking your head while Usman was sharing his thoughts. It sounds like you agree.

>> To put things in perspective, we're firmly in the digital economy. The digital economy, according to the Bureau of Economic Analysis, is 9% of US GDP. Think about that in the context of GDP overall: it ranks only slightly lower than manufacturing, which is at 11.3% of GDP, and slightly above finance and insurance, which is about seven and a half percent. So the digital economy is firmly in our lives, and as Usman was saying, software eats the world, and digital operational excellence is critical for customers to drive profitability and growth in the digital economy.
So we got kind of consensus there, as you said, wrapping it up. This is basically a, not an option. This is a must to go forward for organizations to be successful. So let's talk about some quick wins, or as you talked about, you know, organizations and sea levels asking, are you ready? What are some quick wins that that organizations can achieve when they're adopting AI? >>You know, um, immediate value. I think I would start with a question. How often do your customers find problems in your digital experience before you do think about that? Right. You know, if you, if you, you know, there's an interesting web, uh, website, um, uh, you know, down detector.com, right? I think, uh, in, in Europe there is an equal amount of that as well. It ha you know, people post their digital services that are down, whether it's a bank that, uh, you know, customers are trying to move money from checking account, the savings account and the digital services are down and so on and so forth. So some and many times customers tend to find problems before it operations teams do. So a quick win is to be proactive and immediate value is visibility. If you do not know what is happening in your complex systems that make up your digital supply chain, it's going to be hard to be responsive. So I would start there >>Visibility this same question over to you from Verizon's perspective, quick wins. >>Yeah. So I think first of all, there's a need to ingest this multi-care spectrum data, which I don't think is humanly possible. You don't have people having expertise, you know, all the seven layers of the OSI model and then across network and security and at the application level. So I think you need systems which are now able to get that data. It shouldn't just be wasted reports that you're paying for on a monthly basis. It's about time that you started making the most of those in the form of identifying what are the efficiencies within your ecosystem. First of all, what are the things, you know, which could be better utilized subsequently you have the >>Opportunity to reduce the noise of a trouble tickets handling. It sounds pretty trivial, but >>An average you can imagine every trouble tickets has the cost in dollars, right? >>So, and there's so many tickets and there's art >>That get created on a network and across an end user application value, >>We're talking thousands, you know, across and end user >>Application value chain could be million in >>A year. So, and so many of those are not really, >>He, you know, a cause of concern because the problem is something. >>So I think that whole triage is an immediate cost saving and the bigger your network, the bigger >>There's a cost of things, whether you're a provider, whether you're, you know, the end customer at the end of the day, not having to deal with problems, which nobody can resolve, which are not meant to be dealt with. There's so many of those situations, right, where service has just been adopted, >>Which is just coordinate quality, et cetera, et cetera. So many reasons. So those are the, >>So there's some of the immediate cost saving them. They are really, really significant. >>Secondly, I would say Raj mentioned something about, you know, the user, >>Your application value chain, and an understanding of that, especially with this hybrid cloud environment, >>Et cetera, et cetera, right? 
The time it takes to identify a problem in an end-user application value chain, across the seven layers of the OSI reference model I mentioned, across network, security, and the application environment, in its own right has a massive cost to the business. There could be sale transactions obstructed because of it. And I'm going to use a really interesting example: we talk about IoT, and the integrity of the IoT machine is pivotal in this new world we're stepping into. You could be running commands super efficiently, with everything being told to the machine really fast, but what if it's hacked and the robotic arm starts doing things you don't want it to do? There's so much of that that becomes part of this naturally, and not just from a cost standpoint: anything going wrong with that code base means massive costs to the business in the form of lost revenue, the resulting perception in the market, and all of that. Those are a couple of very immediate problems. But then you also have the whole play of virtualized resources, where you can automate the allocation, quantification, and orchestration of those virtualized resources, rather than a person having to see something and say, oh, I need to increase capacity over here because it's going to affect this particular application. You have systems doing this, and, to Raj's point, your customer should not be identifying your problems before you, because in digital it's all about perception.

>> Absolutely, we definitely don't want the customers finding it first. Rich, let's wrap this particular question up with you: from that senior analyst perspective, how can companies make a big impact quickly with AIOps?

>> Usman really summed up some great use cases there. One of the biggest struggles we've always had in operations is mean time to resolve. We're pretty good at resolving things; we just have to find the thing we have to resolve, and that's always been the problem. Using these advanced analytics and machine learning algorithms across all machine and application data helps, because our tendency as humans is to look at the console and say, what's flashing red, that must be what we have to fix, when it could be something that's yellow somewhere else, six services away. We have made things so complicated, and this is what I was saying earlier about how we can't get there anymore on our own; we need help. And all of the things Usman outlined build up to a higher-level question of what the customer experience is, what the customer journey is. We've struggled for years in the digital world with measuring that on a day-to-day basis. We know that in online retail, if you're having a bad experience at one retailer, you just want your thing, so you go to another retailer. Brand loyalty isn't what it was in the brick-and-mortar world, where you had a department store near you and you were loyal to it because it was in your neighborhood; online, that doesn't exist anymore.
So we need to be able to understand the customer from the first moment they touch a digital service, all the way through their journey in that digital service, down to the lowest layer, whether it's a database or the network, and then back to them again. And today we're not understanding: was that a good experience we gave them? How does it compare to last week's experience? What should we be doing to improve it next week? Companies are starting to do this, and the pandemic has certainly pushed the timeline; if you listen to the CEO of Microsoft, it was ten years of digital transformation compressed into the first several months of this. The banks, financial institutions, and insurance companies I talk to aren't slowing down; they're trying to speed up. What they've discovered is that when we were on lockdown, use of digital services spiked very high, and what they've learned is that it's never going to go back down, never going to return to pre-pandemic levels. So now they're in this new reality: how do we service those customers, and how do we make sure we keep them loyal to our brand? They're looking for modernization opportunities, because a lot of things have been exposed. I think Raj touched on this very early in the conversation: visibility gaps. Now that we're on the outside looking in at the data center, we realize we architected things a certain way, and with better ways of making correlations across disparate technologies to understand where the problem lies, we can give better services to our customers. I think that's where we're going to see a lot of the innovation, with people really clamoring for these new ways of doing things, starting now. I've seen it in customers, but I think the real push will come through the end of this year into next year, when the economy straightens out a little more; people are going to take a hard look at where they are and whether AIOps is the way forward for them, and I think they'll find the answer is yes, for sure.

>> So we've come to a consensus on the perils for organizations, basically the cost of doing nothing, and you've given some great advice on where some of those quick wins are. Let's talk about something Raj touched on earlier: are organizations really ready for truly automated AIOps? Raj, I want to start with you on the readiness factor. What are your thoughts?

>> We place our lives in the hands of automated systems all the time in our day-to-day lives in the digital world. At least the customers I talk to have sophisticated systems; advanced automation is a reality. If you look at social media, AI, ML, and automation are used to automate away misinformation. If you look at financial institutions, AI and ML are used to automate away fraud. So I ask our customers: why can't we automate away toil in IT operations systems? That's where our customers are. And I'm a glass-half-full kind of person.
This pandemic has been harder on many of our customers, but what we have learned from them is that they've risen to the occasion. They've used digital as a key need, at scale. That's what we see when Usman and his team talk about network operational intelligence; that's what it means to us. So I think they are ready. The intersection of customer experience, IT, and OT, operational technology, is ripe for automation. And I want to give a shout-out to three key personas in this mix, because it's about people: one is the SRE persona, the site reliability engineer; the second is the information security persona; and the third is the IT operator and automation engineer persona. These folks are building, in their organizations, a system of intelligence that can respond rapidly to the needs of their digital business. We at Broadcom are in the business of helping them construct that system of intelligence, a human-augmented solution. When I interact with large enterprise customers, they want to achieve what I would call advanced automation and AI/ML solutions, and that's squarely where AIOps is going; everything Rich says points the same way, and that's what we want to help our customers do.

>> Rich, what about your perspective on organizations being ready for truly automated AIOps?

>> The conversation has shifted a lot from pre-pandemic. At the end of last year, or two years ago, I'd go to conferences and people would come up and ask me whether this was all smoke and mirrors: these systems can't do this, right? Because it is such a leap forward from where they are today. In software and other systems we iterate and move forward slowly, so it's not a big shock, but this is, for a lot of organizations, a big leap forward from how they're running their operations teams today. Now they've come around and say, you know what, we want to do this. We want all the automation. We don't want our staff doing the low-complexity, repetitive tasks over and over again. We have a lot of those kinds of legacy systems that we're not going to rebuild, but they need certain care and feeding, so why are we having operations people do those tasks? Why aren't we automating them away? The other piece, and I'll say this to any of the operations teams thinking about going down this path, is that you have to understand that the operations models we've been running under in I&O for the last 25 years are super outdated and fundamentally broken for the digital age. We have to start thinking about different ways of doing things, and how do we do that? It's people and organization: people are going to work together differently in an AIOps world, for the better. The age of the 40-person bridge call for troubleshooting is going away; it's going to be the three, four, five focused engineers who need to be there for that particular incident. A lot of the manual process we have in our level one and level two engineering,
the running of tickets and gathering of artifacts during an incident, is going to be automated, and that's a good thing; we shouldn't be doing those things by hand anymore. So I'd say to people: start thinking about what this means to your organization. Start thinking about the great things we can do by automating away the tasks people have to do over and over again, and what that means for them, getting them matched to what they want to be doing, which is higher-level engineering work. They want to be doing modernization, working with new tools and technologies. These are all good things that help the organization perform better as a whole.

>> Great advice, Rich, and great thoughts on what the audience needs to be on the lookout for. Usman, I want to go over to you: give me your thoughts on what the audience should be on the lookout for, or put on their agendas, in the next 12 months.

>> There are a couple of ways to answer that question. One would be some of the things they have to be concerned about in implementing this kind of solution and harnessing its power; the other would be the advantages they should look for. On the first one, the possible pitfalls to watch out for: everybody has data, and one strategy says, okay, you've got the data, let's see what we can do with it, but the exact opposite side has to be considered in that analysis: what are the use cases you're looking to drive? With those use cases you have to understand whether you're taking a reactive or a proactive approach; that's a very important consideration. Then you have to be very cognizant of where the data you have resides, what the systems are, where it needs to go for this AI function to happen, and whether there needs to be any backward communication of that data in a processed manner. These are critical points, because you can have an AI solution sitting in a customer data center, in a managed services provider's data center like ours, or in a cloud data center like AWS, or you could have hybrid setups and so on. So you have to be very mindful of where you're going to get the data from, where it's going to go, and what use cases you're trying to drive, going back and forth a bit. We've got this data, and I think it's a journey; nobody can come in and say, hey, you've built this fantastic thing, it's like Terminator 2. It's a journey, and we start with the network. My personal focus always comes down to the network, and with 5G even more so: with 5G, you're talking low-latency communication, that's the true power of 5G, low latency and ultra-high bandwidth. But what's the point of that low latency if the actions needed to prevent problems in applications, IoT applications, remote surgeries, self-driving vehicles, and so on, sit with people sipping their coffee and trying to take action? That response needs to be low latency as well.
So those are, I think, some of the fundamental things: you have to know your data, your use cases, the locations where data needs to be exchanged, and the parameters around extending that data. And from that point, in one word, it's all about realizing business outcomes. AI has to come in as a digital labor that shows you: I have saved you this amount of time, I have identified or prevented big problems, or I have saved you this much resource in a month, or a year, or whatever timeline people want to see. Those are some of the initial starting points, and then it all starts coming together. But the key is that no one system can do everything. You have to have a way to share data: once you've brought all of that data into one system, maybe you can send it to another system to take more advantage of it. That system might be an AI-driven IoT system looking at all of your streets and making sure the lights are off when they should be, to be more carbon neutral, and all that great stuff.

>> That's good stuff for the audience to consider. Raj, take us home from here. What are some of the takeaways you think the audience really needs to be laser-focused on as we move into next year?

>> One key takeaway is that as we embark on 2021, closing the gap between intent and outcome, and output and outcome, will become critical, especially for digital transformation at scale. Context for customer experience becomes even more critical, as Usman was saying: being network-aware and having network availability is a necessary condition, but it's not a sufficient condition anymore. What customers have to move toward is going from network availability to network agility with high security, what we call app-aware networks. How do you differentiate between a million-dollar trade happening between London and New York, a YouTube training video an employee is going through, and a YouTube video that millions of customers are watching? Three different contexts, three different customer scenarios. That is going to be critical. And last but not least, the feedback loop: responsiveness is all about the feedback loop. You cannot predict everything, but you can respond to things faster. Those are the three things customers are really going to have to think about, and that's also where I believe AIOps comes in. One of the points Usman made that I want to shout out is that heterogeneity is key: there is no homogeneous tool in the world that can solve these problems, so you want an open, extensible system of intelligence that can harness data from disparate data sources and provide the visualization, actionable insight, and human-augmented recommendation systems that IT operators so badly need to be successful. I think that's where it's going.

>> Amazing. You've provided so much content, context, and so many recommendations for the audience. I think we accomplished our goal on this.
I'll call it a power panel: we not only got to a consensus on where AIOps needs to go in the future, but also great recommendations for what businesses in any industry need to be on the lookout for. Rich, Usman, Raj, thank you for joining me today, and we want to thank you for watching. This was such a rich session; you'll probably want to watch it again. Thanks for your time.

Thanks so much for attending and participating in the AIOps Virtual Forum. We really appreciate your time, and we hope you clearly understand the value that AIOps platforms can deliver to many types of organizations. I'm Lisa Martin, and I want to thank our speakers today: Rich Lane from Forrester, Usman Nasir from Verizon, and Raj from Broadcom. Thanks, everyone. Stay safe.
Ruha Benjamin | Keynote Transcript
>> Thank you. Thank you so much for having me. I'm thrilled to be in conversation with you today, and I thought I would kick things off with some opening reflections on this really important session theme, and then we can jump into discussion.

So I'd like us, as a starting point, to wrestle with these buzzwords, empowerment and inclusion, so that we can have them be more than big platitudes and really have them reflected in our workplace cultures, in the things that we design, and in the technologies that we put out into the world. To do that, I think we have to move beyond techno-determinism, and I'll explain what that means in just a minute. Techno-determinism comes in two forms. The first, on your left, is the idea that technology, automation, all of these emerging trends, are going to harm us, are necessarily going to harm humanity: they're going to take all the jobs, they're going to remove human agency. This is what we might call the techno-dystopian version of the story, and it's what Hollywood loves to sell us in the form of movies like The Matrix or Terminator. The other version, on your right, is the techno-utopian story: that technologies, automation, the robots, as a shorthand, are going to save humanity, make everything more efficient, more equitable. On the surface these seem like opposing narratives; they're telling us different stories, or at least they have different endpoints. But when you pull back the screen and look a little more closely, you see that they share an underlying logic: that technology is in the driver's seat, and that human beings, that society, can only respond to what's happening, that we don't really have a say in what technologies are designed.

And so to move beyond techno-determinism, the notion that technology is in the driver's seat, we have to put the human agents and agencies back into the story as protagonists and think carefully about the human desires, worldviews, values, and assumptions that animate the production of technology. We have to put the humans behind the screen back into view. That's the very first step, and when we do that, we see, as was already mentioned, that it's a very homogeneous group right now that holds the power and the resources to produce the digital and physical infrastructure that everyone else has to live with. So, as a first step, we need to think about how to create more participation from those who are working behind the scenes to design technology. Now, to dig a little deeper into this, I want to offer a kind of low-tech example before we get to the more high-tech ones.

What you see in front of you here is a simple public park bench, located in Berkeley, California, which is where I went to graduate school. On this one particular visit, I was living in Boston, so I was back in California; it was February, and it was freezing where I was coming from, so I wanted to take a few minutes between meetings to lie out in the sun and soak in some vitamin D. I quickly realized that I couldn't lie down on the bench because of the way it had been designed, with these armrests at intermittent intervals. And so here I thought, okay, the armrests have a functional reason to be there: you can literally rest your elbows on them, or they can create a little bit of privacy from someone sitting there whom you don't know.
When I was nine months pregnant, they could help me get up and down, and the same goes for the elderly. So the armrests have a lot of functional reasons, but I also thought about the fact that they prevent people who are homeless from sleeping on the bench. And this is the Bay Area we're talking about, where the tech boom has gone hand in hand with a housing crisis; those things have grown in tandem. So innovation has grown with inequity, because we haven't thought carefully about how to address the social context in which technology grows and blossoms. And so I thought, okay, this crisis is growing in this area, and perhaps this is a deliberate attempt to make sure that people don't sleep on the benches, through the way they're designed and where they're implemented. This is what we might call structural inequity: by the way something is designed, it has certain effects that exclude or harm different people. It may not necessarily be the intent, but that's the effect. I did a little digging, and I found that it's in fact a global phenomenon, this thing that architects call hostile architecture: single-occupancy benches in Helsinki, so only one booty at a time and no lying down; caged benches I found in France, where, in that particular town, the mayor put the benches out in a little shopping plaza and within 24 hours the people of the town rallied together and had them removed. So we see that just because we have a discriminatory design in our public space doesn't mean we have to live with it; we can work together to ensure that our public space reflects our better values. But I think my favorite example of all is the metered bench.

In this case, the bench is designed with spikes in it, and to get the spikes to retract into the bench, you have to feed the meter, put some coins in, and I think it buys you about 15 or 20 minutes before the spikes come back up. You'll be happy to know that in this case it was designed by a German artist to get people to think critically about issues of design, not just the design of physical space but the design of all kinds of things, including public policies. And so we can think about how our public life in general is metered, how it serves those who can pay the price while others are excluded or harmed, whether we're talking about education or healthcare. The metered bench also presents something interesting for those of us who care about technology: it creates a technical fix for a social problem. In fact, it started out as art, but some municipalities in different parts of the world have actually adopted it in their public spaces and parks in order to deter so-called loiterers from using the space. By a technical fix, we mean something that creates a short-term effect: it gets people who may want to sleep on the bench out of sight, it makes them unable to use it, but it doesn't address the underlying problems that create the need to sleep outside in the first place. And so, in addition to techno-determinism, we have to think critically about technical fixes that don't address the underlying issues a technology is meant to solve. This is part of a broader issue of discriminatory design, and we can apply the bench metaphor to all kinds of things that we work with or that we create.
And the question we really have to continuously ask ourselves is: what values are we building into the physical and digital infrastructures around us? What are the spikes that we may unwittingly put into place? Or perhaps we didn't create the spikes; perhaps we started a new job or a new position and someone hands us something, saying this is the way things have always been done, and so we inherit the spiked bench. What is our responsibility when we notice that it's creating these kinds of harms or exclusions, or technical fixes that bypass the underlying problem? All of this came to a head in the context of financial technologies. I don't know how many of you remember the high-profile cases of tech insiders and CEOs who applied for the Apple Card. In one case, a husband and wife applied, and the husband received a much higher limit, almost 20 times the limit of his wife, even though they shared bank accounts and lived in a common-law state. And the issue there was not only the fact that the husband was receiving a better interest rate and a higher limit, but also that there was no mechanism for the individuals involved to dispute what was happening. They didn't even know what the factors were by which they were being judged, the factors creating this form of discrimination. So in terms of financial technologies, it's not simply the outcome that's the issue, or that can be discriminatory; it's the process that black-boxes all of the decision-making, so that consumers and the general public have no way to question it, no way to understand how they're being judged adversely. It's the process, not only the product, that we have to care a lot about. The case of the Apple Card is part of a much broader phenomenon of, as the headlines framed it a few years ago, racist and sexist robots. I was so interested in this framing because there was a first wave of stories that seemed shocked at the prospect that technology is not neutral. Then there was a second wave of stories that seemed less surprised: well, of course technology inherits its creators' biases. And now I think we've entered a phase of attempts to override and address the default settings of so-called racist and sexist robots, for better or worse. Here, robots is just a kind of shorthand for the way people are talking about automation and emerging technologies more broadly. As I was encountering these headlines, I was thinking about how these are not problems brought on simply by machine learning or AI; they're not all brand new. So I wanted to contribute to the conversation a larger context and a longer history for us to think carefully about the social dimensions of technology. And so I developed a concept called the New Jim Code, which plays on the phrase Jim Crow, the way that the regime of white supremacy and inequality in this country was defined in a previous era. I wanted us to think about how that legacy continues to haunt the present, how we might be coding bias into emerging technologies, the danger being that we imagine those technologies to be objective. This gives us a language to name the phenomenon so that we can address it and change it. Under the larger umbrella of the New Jim Code are four distinct ways that this phenomenon takes shape, starting with the more obvious: engineered inequity.
Those are the kinds of tech-mediated inequalities that we can generally see coming; they're fairly obvious. But as we go down the line, it becomes harder to detect: it's happening in our own backyards, it's happening around us, and we don't really have a view into the black box, so it becomes more insidious. In the remaining couple of minutes, I'm just going to give you a taste of the last three of these and then move towards a conclusion, and then we can start chatting. When it comes to default discrimination, this is the way that social inequalities become embedded in emerging technologies because the designers of those technologies aren't thinking carefully about history and sociology. A great example of this came to the headlines last fall, when it was found that a widely used healthcare algorithm, affecting millions of patients, was discriminating against Black patients. What's especially important to note here is that this healthcare algorithm does not explicitly take note of race. That is to say, it is race-neutral. But by using cost to predict healthcare needs, this digital triaging system unwittingly reproduces health disparities, because on average Black patients have incurred fewer costs, for a variety of reasons, including structural inequality. In my review of this study by Obermeyer and colleagues, I wanted to draw attention to how indifference to social reality can be even more harmful than malicious intent. It doesn't have to be the intent of the designers to create this effect, and so we have to look carefully at how indifference operates and how race neutrality can be a deadly force.
When we move on to the next iteration of the New Jim Code, coded exposure, there's a tension, because on the one hand you see this image where the darker-skinned individual is not being detected by the facial recognition system, by the camera on the computer. Coded exposure names this tension between wanting to be seen, included, and recognized, whether in facial recognition or in recommendation systems or in tailored advertising, and the other side of that tension, which is being over-included: being surveilled, being too centered. We should note that it's not simply being left out that's the problem; it's also being included in harmful ways. So I want us to think carefully about the rhetoric of inclusion and understand that inclusion is not simply an endpoint, it's a process, and it is possible to include people in harmful processes. We want to ensure that the process is not harmful for inclusion to really be effective.
The last iteration of the New Jim Code, the most insidious, let's say, is technologies that are touted as helping us address bias. They're not simply including people; they're actively working to address bias. In this case, there are a lot of different companies using AI to create hiring software and hiring algorithms, including this one, HireVue. The idea is that there's a lot that AI can keep track of that human beings might miss, and so the software can make data-driven talent decisions. After all, the problem of employment discrimination is widespread and well documented, so the logic goes: wouldn't that be even more reason to outsource decisions to AI? Well, let's think about this carefully.
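To make the default-discrimination example above concrete, here is a minimal, hypothetical sketch in Python. It is synthetic and illustrative only, not the actual model Obermeyer and colleagues audited; the group labels, distributions, and the "access" factor are assumptions made purely for demonstration. It shows how a triage rule that never sees race, only spending, can still allocate care unevenly when spending understates need for one group.

```python
# Illustrative only: synthetic data, not the algorithm studied by Obermeyer
# and colleagues. It shows how a "race-neutral" proxy (cost) can reproduce
# disparities when cost understates need for one group.
import random

random.seed(0)

def make_patient(group):
    need = random.gauss(50, 15)                # true health need, same distribution for both groups
    access = 1.0 if group == "A" else 0.7      # assumed: group B incurs less cost for the same need
    cost = need * access + random.gauss(0, 5)  # observed spending, the proxy the model actually sees
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(5000)] + \
           [make_patient("B") for _ in range(5000)]

# "Triage" the top 20% by predicted risk, where risk is proxied by cost.
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
selected = [p for p in patients if p["cost"] >= cutoff]

for g in ("A", "B"):
    share = sum(p["group"] == g for p in selected) / len(selected)
    avg_need = sum(p["need"] for p in patients if p["group"] == g) / 5000
    print(f"group {g}: share of care slots = {share:.2f}, average true need = {avg_need:.1f}")
# Both groups have the same average need, but group B gets far fewer slots,
# because the model never "sees" race; it only sees a biased proxy.
```

Running it, both groups show the same average need, yet the group whose costs understate its need receives far fewer of the care slots, which is the proxy problem in miniature.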
And this is the idea of techno-benevolence: trying to do good without fully reckoning with how technology can reproduce inequalities. Some colleagues of mine at Princeton tested a natural language processing algorithm, looking to see whether it exhibited the same tendencies that psychologists have documented among humans. What they found was that, in fact, the algorithm associated Black names with negative words and white names with pleasant-sounding words. This particular audit builds on a classic study done around 2003, before all of the emerging technologies were on the scene, in which two University of Chicago economists sent out thousands of resumes to employers in Boston and Chicago.
All they did was change the names on those resumes; all of the other work history and education were the same. Then they waited to see who would get called back, and the fictional applicants with white-sounding names received 50% more callbacks than the Black applicants. If you're presented with that study, you might be tempted to say, well, let's let technology handle it, since humans are so biased. But my colleagues in computer science found that this natural language processing algorithm actually reproduced those same associations with Black and white names. So too with gender-coded words and names, as Amazon learned a couple of years ago when its own hiring algorithm was found to be discriminating against women. Nevertheless, it should be clear by now why technical fixes that claim to bypass human biases are so desirable. If only there were a way to slay centuries of racist and sexist demons with a social justice bot. Beyond desirable, more like magical: magical for employers, perhaps, looking to streamline the grueling work of recruitment, but a curse for many job seekers, as this headline puts it: your next interview could be with a racist bot. That brings us back to the problem space we started with just a few minutes ago.
It's worth noting that job seekers are already developing ways to subvert the system, by trading answers to employers' tests and creating fake applications as informal audits of their own. In terms of a more collective response, there's a federation of trade unions, UNI Global Union, that has developed a charter of digital rights for workers, which calls for automated and AI-based decisions to be included in bargaining agreements. This is one of many efforts to change the ecosystem, to change the context in which technology is being deployed, to ensure more protections and more rights for everyday people. In the US, there's the algorithmic accountability bill that's been introduced, one effort to create some more protections around this ubiquity of automated decisions, and I think we should all be calling for more public accountability when it comes to the widespread use of automated decisions. Another development that keeps me somewhat hopeful is that tech workers themselves are increasingly speaking out against the most egregious forms of corporate collusion with state-sanctioned racism. To get a taste of that, I encourage you to check out the hashtag #TechWontBuildIt, among other statements they have made while walking out and petitioning their companies. One group at Microsoft wrote, "As the people who build the technologies that Microsoft profits from, we refuse to be complicit."
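To give a flavor of the kind of audit described above, the name-association test, here is a toy sketch in Python. The tiny three-dimensional "embeddings" and word lists below are fabricated solely for illustration; the actual audit used word embeddings trained on large text corpora, and the names here simply echo those made famous by the resume study.

```python
# A toy, WEAT-style association test in the spirit of the audit described
# above. The 3-dimensional "embeddings" are made up for illustration; the
# real study used word embeddings learned from large text corpora.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical vectors standing in for learned embeddings.
embeddings = {
    "emily":     [0.9, 0.1, 0.2],   "greg":    [0.8, 0.2, 0.1],
    "lakisha":   [0.2, 0.9, 0.1],   "jamal":   [0.1, 0.8, 0.2],
    "wonderful": [0.9, 0.2, 0.3],   "joy":     [0.8, 0.1, 0.4],
    "terrible":  [0.1, 0.9, 0.3],   "agony":   [0.2, 0.8, 0.4],
}

pleasant, unpleasant = ["wonderful", "joy"], ["terrible", "agony"]

def association(word):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = sum(cosine(embeddings[word], embeddings[a]) for a in pleasant) / len(pleasant)
    neg = sum(cosine(embeddings[word], embeddings[a]) for a in unpleasant) / len(unpleasant)
    return pos - neg

for name in ("emily", "greg", "lakisha", "jamal"):
    print(f"{name:>8}: association score = {association(name):+.2f}")
# A systematic gap in these scores between the two groups of names is the
# kind of signal the audit surfaced in embeddings learned from real text.
```

The point of the sketch is only the shape of the test: if one set of names sits systematically closer to unpleasant words than another, the model has absorbed that association from its training data.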
In terms of education, which is my own ground zero, it's a place where we can grow a more historically and socially literate approach to tech design. And this is just one resource that you all can download, developed by some wonderful colleagues at the Data & Society Research Institute in New York.
The goal of this intervention is threefold: to develop an intellectual understanding of how structural racism operates in algorithms, social media platforms, and technologies not yet developed; an emotional intelligence concerning how to resolve racially stressful situations within organizations; and a commitment to take action to reduce harms to communities of color. As a final way to think about why these things are so important, I want to offer a couple of last provocations. The first is to press us to think anew about what deep learning actually is when it comes to computation. I want to suggest that computational depth in AI systems without historical or social depth is actually superficial learning. We need a much more interdisciplinary, integrated approach to knowledge production, and to observing and understanding patterns, one that doesn't rely on a single discipline to map reality.
The last provocation is this: if, as I suggested at the start, inequity is woven into the very fabric of our society, built into the design of our policies, our physical infrastructures, and now even our digital infrastructures, then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problems we're up against will be their undoing once we accept that we are pattern makers. So what does that look like? It looks like refusing colorblindness as an antidote to tech-mediated discrimination. Rather than refusing to see difference, let's take stock of how the training data and the models we're creating have built-in decisions from the past that have often been discriminatory. It means thinking about the underside of inclusion, which can be targeting, and asking how we create a more participatory rather than predatory form of inclusion. And ultimately it also means owning our own power in these systems so that we can change the patterns of the past. If we inherit a spiked bench, that doesn't mean we need to continue using it. We can work together to design more just and equitable technologies. So with that, I look forward to our conversation.
AIOps Virtual Forum 2020 | Panel
>> From around the globe, with digital coverage brought to you by Broadcom.
So I think it has an absolute necessity to grind those cost efficiencies rather than, you know, left right and center laying off people who are like pit Mattel to your business and have a great tribal knowledge of your business. So to speak, you can drive these efficiencies through automating a lot of those tasks that previously were being very manually intensive or resource intensive. And you could allocate those resources towards doing much better things, which let's be very honest going into 20, 21, after what we've seen with 2020, it's going to be mandate >>Shaking your head there when you, his mom was sharing his thoughts. What are your thoughts about this sounds like you agree. Yeah. I mean, uh, you know, uh, to put things in perspective, right? I mean, we are firmly in the digital economy, right? Digital economy, according to the Bureau of economic analysis is 9% of the us GDP. Just, you know, think about it in, in, in, in, in the context of the GDP, right? It's only ranked lower, slightly lower than manufacturing, which is at 11.3% GDP and slightly about finance and insurance, which is about seven and a half percent GDP. So G the digital economy is firmly in our lives, right? And so someone was talking about it, you know, software eats the world and digital, operational excellence is critical for customers, uh, to, uh, you know, to, uh, to drive profitability and growth, uh, in the digital economy. >>It's almost, you know, the key is digital at scale. So when, uh, when rich talks about some of the challenges and when newsman highlights 5g, as an example, those are the things that, that, that come to mind. So to me, what is the cost or perils of doing nothing? You know, uh, it's not an option. I think, you know, more often than not, uh, you know, C-level execs are asking their head of it and they are key influencers, a single question, are you ready? Are you ready in the context of addressing spikes in networks because of, uh, the pandemic scenario, are you ready in the context of automating away toil? Are you ready to respond rapidly to the needs of the digital business? I think AI ops is critical. >>That's a great point, Roger, where gonna stick with you. So we got kind of consensus there, as you said, wrapping it up. This is basically a, not an option. This is a must to go forward for organizations to be successful. So let's talk about some quick wins. So as you talked about, you know, organizations and C-levels asking, are you ready? What are some quick wins that that organizations can achieve when they're adopting AI? >>You know, um, immediate value. I think I would start with a question. How often do your customers find problems in your digital experience before you think about that? Right. You know, if you, if you, you know, there's an interesting web, uh, website, um, uh, you know, down detector.com, right? I think, uh, in, in Europe, there is an equal amount of that as well. It ha you know, people post their digital services that are down, whether it's a bank that, uh, you know, customers are trying to move money from checking account, the savings account and the digital services are down and so on and so forth. So some and many times customers tend to find problems before it operation teams do. So. A quick win is to be proactive and immediate value is visibility. If you do not know what is happening in your complex systems that make up your digital supply chain, it's going to be hard to be responsive. So I would start there >>Vice visibility. 
There's some question over to you from Verizon's perspective, quick wins. >>Yeah. So I think first of all, there's a need to ingest this multi-layered monetize spectrum data, which I don't think is humanly possible. You don't have people having expertise, you know, all seven layers of the OSI model and then across network and security and at the application of it. So I think you need systems which are now able to get that data. It shouldn't just be wasted reports that you're paying for on a monthly basis. It's about time that you started making the most of those in the form of identifying what are the efficiencies within your ecosystem. First of all, what are the things, you know, which could be better utilized subsequently you have the opportunity to reduce the noise of a troubled tickets handle. It sounds pretty trivial, but as an average, you can imagine every shop is tickets has the cost in dollars, right? >>So, and there's so many tickets and there's desserts that get on a network and across an end-user application value chain, we're talking thousands, you know, across and end user application value chain could be million in a year. So, and so many of those are not really, you know, cause of concern because the problem is somewhere else. So I think that whole triage is an immediate cost saving and the bigger your network, the bigger the cost of whether you're a provider, whether you're, you know, the end customer at the end of the day, not having to deal with problems, which nobody can resolve, which are not meant to be dealt with. If so many of those situations, right, where service has just been adopted, which is coordinate quality, et cetera, et cetera. So many reasons. So those are the, those are some of the immediate cost savings. >>They are really, really significant. Secondly, I would say Raj mentioned something about, you know, the end user application value chain and an understanding of that, especially with this hybrid cloud environment, et cetera, et cetera, right? The time it takes to identify a problem in an end-user application value chain across the seven layers that I mentioned with the OSI reference model across network and security and the application environment, it's something that in its own self has a massive cost to business, right? They could be point of sale transactions that could be obstructed because of this. There could be, and I'm going to use a very interesting example. When we talk IOT, the integrity of the IOT machine is extremely pivotal in this new world that we're stepping into. You could be running commands, which are super efficient. He has, everything is being told to the machine really fast. >>We're sending everything there. What if it's hacked? And if that robotic arm starts to involve the things you don't want it to do. So there's so much of that. That becomes a part of this naturally. And I believe, yes, this is not just like from a cost saving standpoint, but anything going wrong with that code base, et cetera, et cetera. These are massive costs to the business in the form of the revenue. They have lost the perception in the market as a result, the fed, you know, all that stuff. 
So these are a couple of very immediate funds, but then you also have the whole player virtualized resources where you can automate the allocation, you know, the quantification of an orchestration of those virtualized resources, rather than a person having to, you know, see something and then say, Oh yeah, I need to increase capacity over here, because then it's going to have this particular application. You have systems doing this stuff to, you know, Roger's point your customer should not be identifying your problems before you, because this digital where it's all about perception. >>Absolutely. We definitely don't want the customers finding it before. So rich, let's wrap this particular question up with you from that analyst perspective, how can companies use make big impact quickly with AI? >>Yeah, I think, you know, and it has been really summed up some really great use cases there. I think with the, uh, you know, one of the biggest struggles we've always had in operations is isn't, you know, the mean time to resolve. We're pretty good at resolving the things. We just have to find the thing we have to resolve. That's always been the problem and using these advanced analytics and machine learning algorithms now across all machine and application data, our tendency as humans is to look at the console and say, what's flashing red. That must be what we have to fix, but it could be something that's yellow, somewhere else, six services away. And we have made things so complicated. And I think this is what it was. One was saying that we can't get there anymore on our own. We need help to get there in all of this stuff that the outline. >>So, so well builds up to a higher level thing of what is the customer experience about what is the customer journey? And we've struggled for years in the digital world and measuring that a day-to-day thing. We know an online retail. If you're having a bad experience at one retailer, you just want your thing. You're going to go to another retailer, brand loyalty. Isn't one of the light. It wasn't the brick and mortal world where you had a department store near you. So you were loyal to that cause it was in your neighborhood, um, online that doesn't exist anymore. So we need to be able to understand the customer from that first moment, they touch a digital service all the way from their, their journey through that digital service, the lowest layer, whether it be a database or the network, what have you, and then back to them again, and we not understand, is that a good experience? >>We gave them. How does that compare to last week's experience? What should we be doing to improve that next week? And I think companies are starting and then the pandemic, certainly you push this timeline. If you listen to the, the, the CEO of Microsoft, he's like, you know, 10 years of digital transformation written down. And the first several months of this, um, in banks and in financial institutions, I talked to insurance companies, aren't slowing. Now they're trying to speed up. In fact, what they've discovered is that there, obviously when we were on lockdown or what have you, the use of digital services spiked very high. What they've learned is they're never going to go back down. They're never going to return to pretend levels. So now they're stuck with this new reality. Well, how do we service those customers and how do we make sure we keep them loyal to our brand? >>Uh, so, you know, they're looking for modernization opportunities. 
A lot of that, that things have been exposed. And I think Raj touched upon this very early in the conversation is visibility gaps. Now that we're on the outside, looking in at the data center, we know we architect things in a very specific way. Uh, we better ways of making these correlations across the Sparrow technologies to understand where the problems lies. We can give better services to our customers. And I think that's really what we're going to see a lot of the, the innovation and the people really for these new ways of doing things starting, you know, w now, I mean, I think I've seen it in customers, but I think really the push through the end of this year to next year when, you know, economy and things like that, straighten out a little bit more. I think it really, people are going to take a hard look of where they are is, you know, AI ops the way forward for them. And I think they'll find it. The answer is yes, for sure. >>So we've, we've come to a consensus that, of what the parallels are of organizations, basically the cost of doing nothing. You guys have given some great advice on where some of those quick wins are. Let's talk about something Raj touched on earlier, is organizations, are they really ready for truly automated AI? Raj, I want to start with you readiness factor. What are your thoughts? >>Uh, you know, uh, I think so, you know, we place our, her lives on automated systems all the time, right? In our, in our day-to-day lives, in the, in the digital world. I think, uh, you know, our, uh, at least the customers that I talked to our customers are, uh, are, uh, you know, uh, have a sophisticated systems, like for example, advanced automation is a reality. If you look at social media, AI and ML and automation are used to automate away, uh, misinformation, right? If you look at financial institutions, AI and ML are used to automate away a fraud, right? So I want to ask our customers why can't we automate await oil in it, operation systems, right? And that's where our customers are. Then, you know, uh, I'm a glass half full, uh, clinical person, right? Uh, this pandemic has been harder on many of our customers, but I think what we have learned from our customers is they've Rose to the occasion. >>They've used digital as a key moons, right? At scale. That's what we see with, you know, when, when Huseman and his team talk about, uh, you know, network operational intelligence, right. That's what it means to us. So I think they are ready, the intersection of customer experience it and OT, operational technology is ripe for automation. Uh, and, uh, you know, I, I wanna, I wanna sort of give a shout out to three key personas in, in this mix. It's somewhat right. One is the SRE persona, you know, site, reliability engineer. The other is the information security persona. And the third one is the it operator automation engineer persona. These folks in organizations are building a system of intelligence that can respond rapidly to the needs of their digital business. We at Broadcom, we are in the business of helping them construct a system of intelligence that will create a human augmented solution for them. Right. So when I see, when I interact with large enterprise customers, I think they, they, you know, they, they want to achieve what I would call advanced automation and AI ML solutions. And that's squarely, very I ops is, you know, is going as an it, you know, when I talked to rich and what, everything that rich says, you know, that's where it's going. 
And that's what we want to help our customers to. >>So rich, talk to us about your perspective of organizations being ready for truly automated AI. >>I think, you know, the conversation has shifted a lot in the last, in, in pre pandemic. Uh, I'd say at the end of last year, we're, you know, two years ago, people I'd go to conferences and people come up and ask me like, this is all smoke and mirrors, right? These systems can't do this because it is such a leap forward for them, for where they are today. Right. We we've sort of, you know, in software and other systems, we iterate and we move forward slowly. So it's not a big shock. And this is for a lot of organizations that big, big leap forward in the way that they're running their operations teams today. Um, but now they've come around and say, you know what? We want to do this. We want all the automations. We want my staff not doing the low complexity, repetitive tasks over and over again. >>Um, you know, and we have a lot of those kinds of legacy systems. We're not going to rebuild. Um, but they need certain care and feeding. So why are we having operations? People do those tasks? Why aren't we automating those out? I think the other piece is, and I'll, I'll, I'll send this out to any of the operations teams that are thinking about going down this path is that you have to understand that the operations models that we're operating under in INO and have been for the last 25 years are super outdated and they're fundamentally broken for the digital age. We have to start thinking about different ways of doing things and how do we do that? Well, it's, it's people, organization, people are going to work together differently in an AI ops world, um, for the better, um, but you know, there's going to be the, the age of the 40 person bridge call thing. >>Troubleshooting is going away. It's going to be three, four, five focused engineers that need to be there for that particular incident. Um, a lot of process mailer process we have for now level one level, two engineering. What have you running of tickets, gathering of artifacts, uh, during an incident is going to be automated. That's a good thing. We shouldn't be doing those, those things by hand anymore. So I'd say that the, to people's like start thinking about what this means to your organization. Start thinking about the great things we can do by automating things away from people, having to do them over and over again. And what that means for them, getting them matched to what they want to be doing is high level engineering tasks. They want to be doing monitorization, working with new tools and technologies. Um, these are all good things that help the organization perform better as a whole >>Great advice and great kind of some of the thoughts that you shared rich for what the audience needs to be on the, for going on. I want to go over to you, give me your thoughts on what the audience should be on the lookout for, or put on your agendas in the next 12 months. >>So there's like a couple of ways to answer that question. One thing would be in the form of, you know, what are some of the things they have to be concerned about in terms of implementing this solution or harnessing its power. The other one could be, you know, what are the perhaps advantages they should look to see? So if I was to talk about the first one, let's say that, what are some of the things you have to watch out for like possible pitfalls that everybody has data, right? 
So yeah, that's one strategy, we'd say, okay, you've got the data, let's see what we can do with them. But then there's the exact opposite side, which has to be considered when you're doing that analysis that, Hey, what are the use cases that you're looking to drive, right? But then use cases you have to understand, are you taking a reactive use case approach? >>Are you taking quite active use cases, right? Or that that's a very, very important consideration. Then you have to be very cognizant of where does this data that you have vision, it reside, what are the systems and where does it need to go to in order for this AI function to happen and subsequently if there needs to be any, you know, backward communication with all of that data in a process better. So I think these are some of the very critical points because you can have an AI solution, which is sitting in a customer data center. It could be in a managed services provider data center, like, right, right. It could be in a cloud data center, like an AWS or something, or you could have hybrid scenarios, et cetera, all of that stuff. So you have to be very mindful of where you're going to get the data from is going to go to what are the use cases you're trying to, you have to do a bit of backward forward. >>Okay. We've got this data cases and I think it's the judgment. Nobody can come in and say, Hey, you've built this fantastic thing. It's like Terminator two. I think it's a journey where we built starting with the network. My personal focus always comes down to the network and with 5g so much, so much more right with 5g, you're talking low latency communication. That's like the two power of 5g, right? It's low latency, it's ultra high bandwidth, but what's the point of that low latency. If then subsequently the actions that need to be taken to prevent any problems in critical applications, IOT applications, remote surgeries, uh, test driving vehicles, et cetera, et cetera. What if that's where people are sitting and sipping their coffees and trying to take action that needs to be in low latency as well. Right? So these are, I think some of the fundamental things that you have to know your data, your use cases and location, where it needs to be exchanged, what are the parameters around that for extending that data? >>And I think from that point onward, it's all about realizing, you know, in terms of business outcomes, unless AI comes in as a digital labor, that shows you, I have, I have reduced your, this amount of, you know, time, and that's a result of big problems or identified problems for anything. Or I have saved you this much resource right in a month, in a year, or whatever, the timeline that people want to see it. So I think those are some of the initial starting points, and then it all starts coming together. But the key is it's not one system that can do everything. You have to have a way where, you know, you can share data once you've got all of that data into one system, maybe you can send it to another system and make more, take more advantage, right? That system might be an AI and IOT system, which is just looking at all of your streetlights and making sure that Hey, parent switched off just to be more carbon neutral and all that great stuff, et cetera, et cetera >>For the audience, you can take her Raj, take us time from here. What are some of the takeaways that you think the audience really needs to be laser focused on as we move forward into the next year? 
You know, one thing that, uh, I think a key takeaway is, um, uh, you know, as we embark on 2021, closing the gap between intent and outcome and outputs and outcome will become critical, is critical. Uh, you know, especially for, uh, uh, you know, uh, digital transformation at scale for organizations context in the, you know, for customer experience becomes even more critical as Swan Huseman was talking, uh, you know, being network network aware network availability is, is a necessary condition, but not sufficient condition anymore. Right? The what, what, what customers have to go towards is going from network availability to network agility with high security, uh, what we call app aware networks, right? >>How do you differentiate between a trade, a million dollar trade that's happening between, uh, you know, London and New York, uh, versus a YouTube video training that an employee is going through? Worse is a YouTube video that millions of customers are, are watching, right? Three different context, three different customer scenarios, right? That is going to be critical. And last but not least feedback loop, uh, you know, responsiveness is all about feedback loop. You cannot predict everything, but you can respond to things faster. I think these are sort of the three, uh, three things that, uh, that, uh, you know, customers are going to have to, uh, have to really think about. And that's also where I believe AI ops, by the way, AI ops and I I'm. Yeah. You know, one of the points that was smart, shout out to what he was saying was heterogeneity is key, right? There is no homogeneous tool in the world that can solve problems. So you want an open extensible system of intelligence that, that can harness data from disparate data sources provide that visualization, the actionable insight and the human augmented recommendation systems that are so needed for, uh, you know, it operators to be successful. I think that's where it's going. >>Amazing. You guys just provided so much content context recommendations for the audience. I think we accomplished our goal on this. I'll call it power panel of not only getting to a consensus of what, where AI ops needs to go in the future, but great recommendations for what businesses in any industry need to be on the lookout for rich Huisman Raj, thank you for joining me today. >>Pleasure. Thank you. Thank you. >>We want to thank you for watching. This was such a rich session. You probably want to watch it again. Thanks for your time.
John Roese, Dell Technologies | Dell Technologies World 2020
(bright music) >> Announcer: From around the globe, it's theCUBE with digital coverage of Dell Technologies World Digital Experience. Brought to you by Dell Technologies. >> Hello, and welcome back to theCUBE's virtual coverage of Dell Technologies World Digital Experience. I'm John Furrier, your host of theCUBE, here for this interview. We're not face to face this year, we're remote because of the pandemic. We've got a great guest, CUBE alumni John Roese, who's the Global Chief Technology Officer at Dell Technologies. John, great to see you. Thank you for remoting in from New Hampshire. Thanks for your time and thanks for coming on. >> Oh, glad to be here. Glad to be here from New Hampshire. The travel is a lot easier this way, so... >> It's been an interesting time. What a year it's been with the pandemic; the good, the bad, and the ugly have been playing out. But if you look at the role of technology, the big theme this year at Dell Technologies World is digital transformation acceleration. Everyone is kind of talking about that, but when you unpack the technology side of it, you're seeing a technology enablement theme that is just unprecedented from an acceleration standpoint. COVID has forced people to look at things they never had to look at before: disruption to business models and business systems, like working at home. (Furrier laughs) Who would have forecasted that kind of disruption? Workloads changing, workforces working differently in the midst of it. So there's been an absolute exposure of the core issues and challenges that need to be worked on and doubled down on, and in some cases, projects that might not have been a priority. You have all of this going on, customers really trying to double down on the things that are working and the things they need to fix, so they can come out of the pandemic with a growth strategy, with modern apps, with cloud and hybrid and multicloud. This has been a huge forcing function. I'd love to get your first reaction to that big wave. >> Yeah, I think as a technologist, sometimes you can see the future maybe a little clearer than the business people can, because there's one thing about technology: it either is, or it isn't. It either is code or hardware and real, or it's marketing. And we knew the technology evolution was occurring, we knew the multicloud world was real, we knew that machine intelligence was real, and we've been working on this for maybe decades. But prior to COVID, many of these areas were still considered risky or speculative. People couldn't quite grok exactly why they wanted a machine doing work on their behalf, or why they might want an AI to be a participant in their collaboration sessions, or why they might want an autonomous vehicle at all. We were talking about how many people autonomous vehicles were going to kill, as opposed to how many they were going to help. Then we had COVID, and suddenly we realized the fragility of our physical world, and the need for digital is much higher. It's actually opened up an enormous accelerant on people's willingness to embrace new technologies. And so whether it's a predictable acceleration of machine intelligence or autonomous systems, or this realization that the cloud world is actually more than one answer, that there are multiple clouds working together, because if you try to do a digital transformation acceleration, you realize that it's not one problem.
It's many, many problems all working together, and then you discover that, hey, some of these can be solved with cloud one and some can be solved with cloud two, and some of them you want to do in your own infrastructure, in a private cloud, and some might belong at the edge. And then suddenly you come to the conclusion that, hey, your strategy has to deal with this system as a system. So across the board, COVID has been an interesting catalyst to get people to really think practically about the technology available to them and how they might be able to take advantage of it quicker. And that's a mixed blessing for us technologists, because they want things sooner, and that means we have to do more engineering. But at the same time, open-minded consumers of technology are very helpful in digital transformations. >> Well, I want to unpack that rethinking with COVID and post-COVID. I mean, everything is going to come down to the before-COVID and after-COVID world. I think that's going to be the demarcation looked at historically. Before we get into that, though, I want to get your thoughts on some of the key pillars of these transformational technologies in play today. Last year at Dell World, when we were physically face to face, we were laying out on theCUBE, and in our analysis, that Dell Technologies has an end-to-end view. You saw a little bit of that at VMworld this year with Project Monterey, looking much more systematically across the board. You mentioned systems and consequences, the reaction to changes. But lay out for us the key areas, the key pillars of the transformational technologies that customers need to look at now to drive the digital path. >> Yeah, we cast a very wide net. We look at literally thousands of technologies, we organize them, and we try to understand and predict which ones are going to matter. And it turns out that over the last couple of years, we've figured out there are really six, what I'll call expanding technology areas, that are likely to be necessary for almost any digital transformation. They aren't exactly what people have been doing historically. So, in no particular order, and they may sound obvious, but when you think about your future, it's very likely all six of these are going to touch you. The first is the obvious one of being able to develop and deliver on multicloud. The cloud journey is by no means done. We are at like the second inning of a nine-inning game, maybe even earlier. We have barely created the multiple-cloud world, much less the true multicloud world, and then really exploiting and automating it still has work to be done. But that's a strategic area for us and for everybody to navigate forward. In parallel to that, what we realize is that multiple cloud is no longer just present in data centers and public clouds; it actually exists in the real world. So this idea of edge: the reconstituting of IT out in the real world to deliver the real-time behavior necessary to actually serve what we predict will be about 70% of the world's data, which will happen outside of data centers. The third is 5G. That's a very specific technology, and I have a long telco background; I was the CTO of one of the largest telecom companies in the world, and I was involved in 2G, 3G, and 4G. (Furrier chuckles) 5G is not just another G. It is not just faster 4G. It does that, but with things like massive machine-type communication, having a million sensorized devices in a square kilometer, and ultra-reliable low-latency communication.
The ability to get preferential services to critical streams of data across the infrastructure; mobile edge compute, putting the edge IT out into the cellular environment; and the fact that it's built in the cloud and IT era, so it's programmable and software-defined. 5G is going to go from being outside of the IT discussion to being the fabric inside the IT discussion. And so I will bet that anybody who has people in the real world, and who is trying to deliver a digital experience to them, will have to take advantage of the capabilities of 5G to do it right. It's a super strategic, important area for Dell and for our industry. Continuing on, we have the data world, the data management world. It's funny, we've been doing data as an industry for a very long time, but the world we were in was the data-at-rest world: databases, data lakes, traditional applications. And that's great, it still matters, but this new world of data in motion is beginning. What that means is that the data is now moving into pipelines. We're not moving it somewhere and then figuring it out; we're figuring it out as the data flows across this multicloud environment. And that requires an entirely different toolchain, architecture, and infrastructure. But it's incredibly important, because it's actually the thing that powers most digital transformations if they're real time. In parallel to that, number five on the list is AI and machine learning. And we have a controversial view on this. We don't view AI as purely a technology. It clearly is a technology, but what we really think customers should think about it as is a new class of user, because AIs are actually some of the most aggressive producers and consumers of data, and consumers of IT infrastructure. We actually estimate that within the next four or five years, the majority of IT capacity in an enterprise environment will be consumed at the behest of a machine learning algorithm or an AI system rather than by a traditional application or person. And all you have to do is one AI project to understand that I'm correct, because they are just massive demand drivers for your infrastructure, but they have a massive return on that demand. They give you things you can't do without them. And then last on the list is this area of security. And to be candid, we have really messed up this area as an industry. We have a security product for every problem, we have a proliferation of security technologies, and to make matters worse, we now operate most enterprises on the assumption that the bad guys are already inside and we're doing things to prevent them from causing harm. Now, if that's all it is, we've really lost this one. So we have an obligation to reverse this trend, to start moving back to embedding the security in the infrastructure with intrinsic security, with zero-trust models, with things like SASE, which is basically creating new models of the edge security paradigm to be more agile and software-defined. But most importantly, we have to pull it all together and say, what we're really measuring is the trustworthiness of the systems we work with, not the individual components. So this elevation of security to trust is going to be a big journey for all of us. And every one of those six is an individual area, but when you combine them, they actually describe the foundation of a digital transformation.
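To ground the "data in motion" pillar above in something concrete, here is a minimal sketch in Python of a pipeline that scores telemetry as it flows, rather than after it lands in a store. It is illustrative only, not any specific Dell or VMware pipeline; the event source, window size, and thresholds are assumptions made for the example.

```python
# A minimal sketch of the "data in motion" idea: instead of landing telemetry
# in a store and analyzing it later, each event is scored as it flows through
# a pipeline. Source, thresholds, and window size are assumed for illustration.
from collections import deque
import random, time

def sensor_stream(n=200):
    """Stand-in for an edge telemetry feed (e.g., latency samples in ms)."""
    for _ in range(n):
        value = random.gauss(20, 2) + (random.random() < 0.02) * 40  # rare spikes
        yield {"ts": time.time(), "latency_ms": value}

def enrich(events, site="edge-01"):
    for e in events:
        e["site"] = site            # add context while the data is still moving
        yield e

def detect(events, window=50, k=4.0):
    history = deque(maxlen=window)  # rolling window instead of a data lake
    for e in events:
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            if e["latency_ms"] > mean + k * var ** 0.5:
                yield {**e, "alert": "latency spike"}
        history.append(e["latency_ms"])

for alert in detect(enrich(sensor_stream())):
    print(alert)   # downstream consumers (dashboards, automation) react in near real time
```

The design point is that each stage consumes and produces a stream, so the same shape scales from a laptop generator to a distributed message bus without changing the logic of the pipeline.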
And so it's important for people to be aware of them, it's important for companies like Dell to be very active in all of them, because ultimately what you have today, plus those six properly executed, is the digital transformation outcome that most people are heading towards. >> You just packed it all six pillars into one soundbite. That was awesome. Great insight there. One of the things that's interesting, you mentioned AI. I love that piece around AI being a consumer. They are a consumer of data, they're also a consumer of what used to be handled by either systems or humans. That's interesting. 5G is another one. Pat Gelsinger has said at VMworld that 5G, and when I interviewed him he said 5G is a business app, not a consumer app. Yet, if you look at the recent iPhone announcement by Apple, iPhone 12, and iPhone 12 Pro, 5G is at the center of that announcement. But they're taking it from a different perspective. That's a real world application. They've got the watch, they have new chips in their devices, huge advantage. It's not just bandwidth. And remember the original iPhone launch with 3G if you remember. That made the iPhone. Some are saying if it didn't have the 3G or 2G and 3G, I think it was 3G in the first iPhone. 3G, it would have not been as successful. So again, Apple is endorsing 5G. Gelsinger talks about it as a business app. Double down on that, because I think 5G will highlight some of the COVID issues because people are working at home. They're on the go. They want to do video conferencing. Maybe they want to do this programmable. Unpack the importance of 5G as an enabler and as an IT component. >> Yeah. As I mentioned, 5G isn't just about enhanced mobile broadband which is faster YouTube. It's about much more than that. And because of that combination of technologies, it becomes the connective tissue for almost every digital transformation. So our view by the way, just to give you the Dell official position, we actually view that the 5G or the telecom industry is going through three phases around 5G. The first phase has already happened. It was an early deployment of 5G using traditional technology. It was just 5G as an extension of the 4G environment. That's great, it's out there. There's a phase that we're in right now, which I call the geopolitical phase, where all of a sudden, everybody from companies to countries to industries have realized this is really important. And we have to figure out how to make sure we have a secure source of supply that is based on the best technology. And that has created an interest by people like Dell and VMware and Microsoft, and many other companies to say, "Wait a minute. "This isn't just a telecom thing. "This is, as Pat said a business system. "This is part of the core of all digital." And so that's pulled people like Dell and others more aggressively into the telecom world in this middle phase. But what really is happening is the third phase. And the third phase is a recasting of the architecture of telecom to make it much more like the cloud and IT world. To separate hardware from software, to implemented software defined principles, to putting machine interfaces, to treat it like a cloud and IT system architecturally. And that's where things like OpenRAN, integrated open networks, and these new initiatives are coming into play. All of that from Dell perspective is fantastic because what it says is the telecom world is heading towards companies like us. 
And so, as you may know, we set up a brand new telecom business at scale up here to our other businesses this year. We already are doing billions of dollars in telecom, but now we believe we should be playing a meaningful tier-one role in this modern telecom ecosystem. It will be a team sport. There's lots of other players we have to work with. But because of the breadth of applications of 5G. And whether it's again, an iPhone with 5G is great to do YouTube, but it's incredibly powerful if you run your business applications on there, and what you want to actually deliver is an immersive augmented experience. So without 5G, it will be very hard to do that. So it becomes a new and improved client. We announced a Latitude 9,000 Series, and we're one of the first to put out a 5G enabled laptop. In certain parts of the world, we're now starting to ship these. Well again, when you have access to millimeter wave and gigabit speed capacity, you can do some really interesting things on that device, more oriented towards what we call collaborative computing which the client device and the adjacent infrastructure have so much bandwidth between them, that they look like one system. And they can share the burden of augmented reality, of data processing, of AI processing all in the real time domain. Carry that a little further, and when we get into the areas like healthcare transformation or educational transformation. What we realize immediately is reach is everything. You want to have a premium broadband experience, and you need a better system to do that. But really the thing that has to happen is not just a Zoom call, but an immersive experience in which a combination of low bandwidth, always on sensors are able to send their data streams back. But also, if you want to have a more immersive experience to really exploit your health situation, being able to do it with holography and other tools, which require a lot more bandwidth is critical. So no matter where you go in a digital transformation in the real world that has real people and things out in the real world involved in it, the digital fabric for connectivity is critical. And you suddenly realize the current architecture's pre-5G aren't sufficient. And so 5G becomes this linchpin to basically make sure that the client and the cloud and the data center all have a framework that they can actually work together without, let's call it a buffering resistance between them called the network. Imagine if the network was an enabler, not an impediment. >> Yeah, I think you're on point here. I think this is really teases out to me the next-gen business transformation, digital transformation because if you think about what you just talked about, connective tissue, linchpin with 5G, data as a driver, multicloud, the six pillars you laid out, and you mentioned systems, connective tissue systems. I mean, you're basically talking tech under the hood like operating system mindset. These systems design are interesting. If you put the pieces together, you can create business value. Not so much speeds and feeds, business value. You mentioned telco cloud. I find that fascinating. I've been saying on theCUBE for years, and I think it's finally playing out. I want to get your reactions of this is, this rise of the specialty cloud. I called it tier-one on the power law kind of the second wave of cloud. Look at Snowflake. They went public. Biggest IPO in the history of the New York Stock Exchange of Wall Street, second to VMware. They built on Amazon. 
(Furrier laughs) Okay. You have the telco cloud, we have theCUBE cloud, we have the media cloud. So you're seeing businesses looking at the cloud as a business model opportunity, not just buying gear to run something faster, right? So you're getting at something here where it's real benefits are now materializing and are now visible. First of all, do you agree with that? I'm sure you do. I'd love to get your thoughts on that. And if you do, how do companies put this together? Because you need software, you got to have the power source with cloud. What's your reaction to that? >> Absolutely. I think, now obviously there are many clouds. We have some mega clouds out there and then we have lots of other specialty clouds. And by the way, sometimes you remember we view cloud as an operating model, an experience, a way to present an IT service. How it's implemented is less important than what it looks like to the user. Your example of Snowflake. I don't view Snowflake as AWS. I view Snowflake as a storage business. (Furrier chuckles) >> It's a business. >> It's a business cloud. I mean, they could lift it up and move it onto another cloud infrastructure and still be Snowflake. So, as we look forward, we do see more of the consumables that we're going to use and digital transformation appearing as these cloud services. Sometimes they're SaaS cloud, sometimes they're an infrastructure cloud, sometimes they're a private cloud. One of the most interesting ones though that we see that hasn't happened yet is the edge clouds that are going to form. Edge is different. It's in the real time domain, it's distributed. If you do it at scale, it might look like massive amounts of capacity, but it isn't infinite in one place. Public cloud is infinite capacity all in one place. An edge cloud is infinite capacity distributed across 50,000 points of presence at which each of them has a finite amount of capacity. And the other difference though, is that edge clouds tend to live in the real time domain. So 30 millisecond round trip latency. Well, the reason this one's exciting to me is that when you think about what happened at the software and business model innovation, when for instance public clouds and even co-location became more accessible, companies who had this idea that needed a very large capacity of infrastructure that could be consumed as a service suddenly came into existence. Salesforce.com go through the laundry list. But all of those examples were non-real time functions because the clouds they were built on were non-real time clouds if you take them in the end to end, in the system perspective. We know that there are going to be both from the telecom operators and from cloud providers and co-location providers, and even enterprises, a proliferation of infrastructure out in the real time domain called edges. And those are going to be organized and delivered as cloud services. They're going to be pools of flexible elastic capacity. What excites me is suddenly we're going to spawn a level of innovation, where people who had this great idea that they needed to access cloud light capacity, but they ran into the problem that the capacity was too far away from the time domain they needed to operate. And we've already seen some examples of this in AR and VR. Autonomous vehicles require a real time cloud near the car, which doesn't exist yet. 
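One way to read the contrast described above, effectively infinite capacity in one place versus finite capacity spread across tens of thousands of points of presence, is as a placement problem with two constraints: a latency budget and local headroom. A toy sketch, with made-up site names and numbers rather than any real provider's API:

```python
def place_workload(pops, cpu_needed, rtt_budget_ms=30):
    """Pick the lowest-latency point of presence that still has capacity."""
    candidates = [p for p in pops
                  if p["rtt_ms"] <= rtt_budget_ms and p["free_cpu"] >= cpu_needed]
    if not candidates:
        return None  # fall back to a regional or central cloud
    return min(candidates, key=lambda p: p["rtt_ms"])["name"]

pops = [
    {"name": "dallas-edge-3", "rtt_ms": 9,  "free_cpu": 2},
    {"name": "austin-edge-1", "rtt_ms": 14, "free_cpu": 32},
    {"name": "us-central",    "rtt_ms": 48, "free_cpu": 4096},
]
print(place_workload(pops, cpu_needed=8))   # austin-edge-1: nearest PoP with headroom
```

When no edge site qualifies, the work falls back to a larger, farther region, which is exactly the real-time gap the speaker says does not yet exist for workloads like autonomous vehicles.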
When we think about things like smart cities and smart factories, they really need to have that cloud capacity in the time domain that matters if they want to be a real time control system. And so, I don't know exactly what the innovation is going to be, but when you see a new capability show up, in this case, it's inevitable that we're going to see pools of elastic, consumable capacity in the real time environment as edges start to form. It's going to spawn another innovation cycle that could be as big as what happened in the public cloud environment for non-real time. >> Well, I think that's a great point. Time-series databases, for one, would be one instant innovation. You mentioned data, data management; time is valuable, and with the latency the data may not be viable afterward if you're a car, right? You've already passed it. So again, all different concepts. And the one thing that, first of all, I agree with you on this whole cloud thing. A nice edge cloud is going to develop nicely. But the question there is, it's going to be software defined, agreed. Security, data, you've got databases, you've got software operated. You mentioned security being broken, and a security product for every problem. And you want to bake it in, intrinsic or whatever you call it these days. How do you get the security model? Because you've got access. Do you federate that? How do you build in security at that level? Whether it's a space satellite or a moving vehicle, the edge is the edge. So what's your thoughts on security as you're looking at this mobile, agile, horizontally scalable distributed system? What's the security paradigm? >> Well the first thing, it has nothing to do with security, but impacts your security outcome in a meaningful way when you talk about the edge. And that is, we have got to stop getting confused that an edge is a single monolithic thing. And we have got to start understanding that an edge is actually a combination of two things. It is a platform that will provide the capacity, and a workload that will do the job, the code. And today, what we find is many people advocating for edges are actually delivering an end-to-end stack that includes bespoke hardware, its infrastructure, and the workloads and capabilities. If that happens, we end up with 1,000 black boxes that all do one thing, which doesn't make any sense out in the real world. So the minute you shift to what the edge is really going to be, which is a combination of edge platforms and edge workloads, you start your journey towards a better security model. First thing that happens is you can secure the edge platform and make it high integrity. You can make sure that that platform has a hardware root of trust, that it operates potentially in a zero trust model, that it has survivability and resiliency, but it doesn't really care what's running on it as much as it has to be stable. Now if you get that one right, now at least you have a stable platform between your public and private environments and the edge. At the workload level though, now you have to think about, well, edge workloads actually should not be bloated. They should not be extremely large scale because there's not enough capacity at the edge. So a concept like SASE is a good example; that's a term one of the analyst firms coined. But I like the concept, which is, hey, what if at the edge you're delivering the workload, but the workload is protected by a bunch of cloud-oriented security services that effectively are presented as part of the service chain?
So you don't have to have your own firewall built into every workload, because in an edge architecture you can use virtual firewalling that's coming to you as a software service, or you can use the SDN to service-chain it into the networking path, and then you can provide deep packet inspection and other services. It all goes back to this idea that, when you deal with the edge, first and foremost, you have to have a reliable, stable platform to guarantee a robust foundation. And that is an infrastructure security problem. But then you have to basically deal with the security problems of the workload in a different way than you do it in a data center. In a data center, you have infinite computing. You can put all kinds of appendages on your code, and it's fine because there's just more compute next to you. In the edge, we have to keep the code pure. It has to be an analytics engine or an AI engine for systems control in a factory. And the security services actually have to be a function of the end-to-end path, more likely delivered as software services slightly upstream. That architectural shift is not something people have figured out yet. But if we get it right, now we actually have a modern, zero trust, distributed, software defined, service-chainable, dynamic security architecture, which is a much better approach to intrinsic security than trying to just hard-code the security into the workload and tie it to the platform, which has never worked. So we're going to have to have a pretty big rethink to get through this. But for me, it's pretty clear what we have to do. >> Now I'd say that's a good observation. Great insight. I'll just double down and ask a followup on that. I get that. I see where you're going with that software defined, software operated service. I love the SASE concept. We've covered it. But the edge is still purpose-built devices. I mean, we've talked about an iPhone, and you're talking about a watch, you're talking about a space module, whatever it is at the edge; on a tower, it could be a radio. I mean, whatever it is, you seem to have purpose-built hardware. You mentioned this root of trust. That'll kind of never go away. You're going to have that. What's your thoughts on that as someone who realizes, I've got to harden the edge, at least from a hardware standpoint, but I want to be enabled for software-defined. I don't want to have a product be purpose-built and then be obsolete in a year. Because that's again the challenge of supply chain management, building hardware. What's your thoughts on that? >> Yeah. Our edge strategy, if we double-click a little bit, is different than the strategy to build for a data center. We want consistency between them, but there's actually five areas of edge that are specific to it. The first is the hardware platform itself. Edge hardware platforms are different than the platforms you put in data centers, whether it be a client or the infrastructure underneath it. And so we're already building hardened devices and devices that are optimized for power and cooling and space constraints in that environment. The second is the runtime on that system is likely to be different. Today we use VMware Cloud Foundation, and that works very well, but as you get smaller and smaller and further away, you have to miniaturize and reduce the footprint. The control plane, we would like to make that consistent. We are using Tanzu and Dell Technologies Cloud Platform to extend out to the edge.
And we think that having a consistent control plane is important, but the way you adapt something like Tanzu for the edge is different because it's in a different place. The fourth is life cycle, which is really about how you secure, how you deploy, how you deal with day two operations. There's no IT person out at the edge, so you're not in a data center. So you have to automate those systems and deal with them in a different way. And then lastly, the way you package an edge solution and deliver it is much different than the way you build a data center. You actually don't want to deal with those four things I just described as individual snowflakes. You want them packaged and delivered as an outcome. And that's why more and more of the edge platform offerings are really cloudlets, or they're a platform that you can use to extend your IT capacity without having to think about Kubernetes versus VMs versus other things. It's just part of the infrastructure. So all of that tells us that edge is different enough that the way you design for it, the way you implement it, and even the life cycle have to take into account that it's not in a data center. The trick is to then turn that into an extended multicloud where the control plane is consistent, or when you push code into production with Kubernetes, you can choose to land that container in a data center or push it out to the edge. So you have both a system consistency goal, but also the specialization of the edge environment. Everything from hardware, to control plane, to lifecycle, that's the reality of how these things have to be built. >> That's a great point. It's a systems architecture, whether you're looking at it from the bottom-up component level or the top-down kind of policy and software defined view. So great insight. I wish we had more time. I'd love to get you back and talk about data. We were talking before you came on camera about data. But quickly before we go, your thoughts on AI and the consequences of AI. AI is a consumer. I love that insight. Totally agree. Certainly it's an application. Technology is kind of horizontal. It can be vertically specialized with data. What's your thoughts on how AI can be better for society, and some of the unintended consequences, and how we manage that? >> Yeah, I'm an optimist. We've worked with enough AI systems for long enough to see the benefit. Every one of Dell's products today has machine intelligence inside of it, so we can exceed the potential of what its hardware and software could do without it. It's a very powerful tool. And it does things that human beings just simply can't do. I truly believe that it's the catalyst for the next wave of business process functionality, of new innovation. So it's definitely not something to stay away from. That being said, we don't know exactly how it can go wrong. And we know that there are examples where corrupted or badly biased data could influence it and have a bad outcome. And there are an infinite set of problems to go solve with AI, but there are ones that are a little dangerous to go pursue if you're not sure. And so our advice to customers today is, look, you do not need to build The Terminator to get advantage from AI. You can do something much simpler. In fact, in most enterprise contexts, we believe that the best path is to go look at your existing business processes, where there is a decision that's made by a human being, and it's an inefficient decision.
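Picking up the earlier point in this answer, that a consistent control plane lets you push code into production and land the same container in a data center or out at the edge: a minimal sketch of that placement idea using the standard Kubernetes Python client. The node label key, image names, and namespace here are illustrative assumptions, not Dell's or VMware's actual tooling.

```python
from kubernetes import client, config

def deploy(name, image, target):
    """Push the same container to a data center or an edge site by node label."""
    config.load_kube_config()  # assumes a kubeconfig for the target cluster
    pod = client.V1PodSpec(
        containers=[client.V1Container(name=name, image=image)],
        node_selector={"topology.example.com/tier": target},  # "datacenter" or "edge"
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=pod,
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    deployment = client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# Same packaging, two placements; only the placement label changes.
deploy("inference-svc", "registry.example.com/inference:1.4", "edge")
deploy("training-job",  "registry.example.com/training:1.4",  "datacenter")
```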
And if you can locate those points where a supply chain decision or an engineering decision or a testing decision is done by human beings poorly, and you can use machine intelligence to improve it by five or 10%, you will get a significant material impact on your business if you go after the right processes. At Dell we're doing a ton of AI and machine learning in our supply chain. Why is that important? Well, we happen to have the largest tech supply chain in the world. If we improve it by 1%, it's a gigantic impact on the company. And so our advice to people is you don't have to build an autonomous car. You don't have to build The Terminator. You can apply it much more tactically in spaces that are much safer. Even in the HR examples, we tell our HR people, "Hey, use it for things like performance management "and simplifying the processing of data. "Don't use a bot to do the hiring." That's a little dangerous right now. Because you might inadvertently introduce racism or sexism into that, and we still have some work to do there. So it's a very large surface area. Go where the safe areas are. It'll keep you busy for the next several years, improving your business in dramatic ways. And as we improve the technology for bias correction and management of AI systems and fault tolerance and simplicity, then go after the hard ones. So this is a great one. Go after the easy stuff. You'll get a big benefit and you won't take the risk. >> You get the low-hanging fruit, learn, iterate through it. I'm glad you guys are using machine learning and AI in the supply chain. Make sure it's secure, big issue. I know you guys are on top of it and have a great operation there. John, great to have you on. John Roese, the Global Chief Technology Officer at Dell Technologies. Great to have you on. Take a minute to close out the last minute here. What's the most important story from Dell Technologies World this year? I know it's virtual. It's not face to face. But beyond that, what's the big takeaway in your mind, if you could share one point, what would it be for the folks watching? >> Yeah, I think the biggest point is something we talked about, which is we are in a period of digital transformation acceleration. COVID is bad, but it woke us up to the possibilities and the need for digital transformation. And so if you were on the fence, or if you were moving slowly, now you have an opportunity to move fast. However, moving fast is hard if you try to do it by yourself. And so we've structured Dell around the six big areas we're focused on. They only have one purpose, it's to build the modern infrastructure platforms to enable digital transformation to happen faster. And my advice to people is, great, you're moving faster. Pick your partners well. Choose the people that you want to go on the journey with. And we think we're well positioned for that. And you will have much better progress if you take a broad view of the technology ecosystem and you've lined up the appropriate partnerships with the people that can help you get there. And the outcome is, a successful digital leader is just going to handle things like COVID and its disruption better than a digital laggard. And we now have the data to prove that. So it's all about digital acceleration, that's the punchline. >> Well, great to have you on. Great segment, great insight. And thank you for sharing the six pillars and the conversation. Super relevant on what's going on to create new business value, new opportunities for businesses and society.
I'm John Furrier with theCUBE. Thanks for watching. (bright music)
SEAGATE AI FINAL
>> Seagate Technology is focused on data, and we have long believed that data is in our DNA. We help maximize humanity's potential by delivering world-class, precision-engineered data solutions developed through sustainable and profitable partnerships. Included in our offerings are hard disk drives. As I'm sure many of you know, a hard drive consists of a slider, also known as a drive head or transducer, attached to a head gimbal assembly; a head stack assembly made up of multiple head gimbal assemblies; and a drive enclosure with one or more platters that the head stack assembly is installed into. And while the concept hasn't changed, hard drive technology has progressed well beyond the initial five-megabyte, five-and-a-quarter-inch drives that Seagate first produced in, I think, 1983. We have just announced an 18-terabyte 3.5-inch drive with nine platters on a single head stack assembly, with dual head stack assemblies coming this calendar year. The complexity of these drives furthers the need to incorporate edge analytics at operations sites. W. Edwards Deming established the concept of continual improvement in everything that we do, especially in product development and operations. At the end of World War Two, he embarked on a mission, with support from the US government, to help Japan recover from its wartime losses. He taught the concept of continual improvement and statistical process control to the leaders of prominent organizations within Japan. And because of this, he was honored by the Japanese emperor with the Second Order of the Sacred Treasure for his teachings, the only non-Japanese to receive this honor in hundreds of years. Japan's quality control is now world famous, as many of you may know, and based on my own experience in product development, it is clear that he made a major impact on Japan's recovery after the war. At Seagate, the work that we've been doing in adopting new technologies reflects our mantra of continual improvement. As part of this effort, we embarked on the adoption of new technologies in our global operations, which includes establishing machine learning and artificial intelligence at the edge, and in doing so, we continue to advance our technical capabilities within data science and data engineering. >> So I'm a principal engineer and a member of the Operations and Technology Advanced Analytics Group. We are a service organization for those organizations who need to make sense of the data that they have and, in doing so, perhaps introduce a different way to create and analyze new data. Making sense of the data that organizations have is a key aspect of the work that data scientists and engineers do. I'm the project manager for an initiative adopting artificial intelligence methodologies for Seagate manufacturing, which is the reason why I'm talking to you today. I thought I'd start by first talking about what we do at Seagate and follow that with a brief on artificial intelligence and its role in manufacturing. I'd then like to discuss how AI and machine learning are being used at Seagate in developing edge analytics, where Docker Enterprise and Kubernetes automate deployment, scaling and management of containerized applications. And finally, I'd like to discuss where we are headed with this initiative and where Mirantis has a major role. In case some of you are not conversant in machine learning and artificial intelligence and the difference between them, here are some definitions.
To cite one source, machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead, thus being seen as a subset of narrow artificial intelligence, where analytics and decision making take place. The intent of machine learning is to use basic algorithms to perform different functions, such as classifying images by type, classifying emails into spam and not spam, and predicting weather. The idea, and this is where the concept of narrow artificial intelligence comes in, is to make decisions of a preset type, basically letting a machine learn from itself. These types of machine learning include supervised learning, unsupervised learning and reinforcement learning. In supervised learning, the system learns from previous examples that are provided, such as images of dogs that are labeled by type. In unsupervised learning, the algorithms are left to themselves to find answers. For example, a series of images of dogs can be used to group them into categories by association, that is, color, length of coat, length of snout and so on. So in the last slide, I mentioned narrow AI a few times, and to explain it, it is common to describe AI in terms of two categories, general and narrow, or weak. Many of us were first exposed to general AI in popular science fiction movies like 2001: A Space Odyssey and Terminator. General AI is AI that can successfully perform any intellectual task that a human can, and if you ask Elon Musk or Stephen Hawking, this is how they view the future with general AI if we're not careful in how it is implemented, though most of us hope that it is more like this, friendly and helpful, like WALL-E. The reality is that machines today are only capable of weak or narrow AI, AI that is focused on a narrow, specific task like understanding speech or finding objects in images. Alexa and Google Home are becoming very popular, and they can be found in many homes. Their narrow task is to recognize human speech and answer limited questions or perform simple tasks like raising the temperature in your home or ordering a pizza, as long as you have already defined the order. Narrow AI is also very useful for recognizing objects in images and even counting people as they go in and out of stores, as you can see in this example. So artificial intelligence applies machine learning, analytics, inference and other techniques which can be used to solve actual problems. The two examples here, particle detection and image anomaly detection, have the potential to adopt edge analytics during the manufacturing process. A common problem in clean rooms is spikes in particle count from particle detectors. With this application, we can provide context to particle events by monitoring the area around the machine and detecting when foreign objects like gloves enter areas where they should not. Image anomaly detection historically has been accomplished at Seagate by operators in clean rooms viewing each image, one at a time, for anomalies. Creating models of various anomalies through machine learning methodologies can be used to run comparative analyses in a production environment where outliers can be detected through inference in an automated, real-time analytics scenario. So anomaly detection is also frequently used in machine learning to find patterns or unusual events in our data. How do you know what you don't know?
It's really what you ask, and the first step in anomaly detection is to use an algorithm to find patterns or relationships in your data. In this case, we're looking at hundreds of variables and finding relationships between them. We can then look at a subset of variables and determine how they are behaving in relation to each other. We use this baseline to define normal behavior and generate a model of it. In this case, we're building a model with three variables. We can then run this model against new data. Observations that do not fit the model are defined as anomalies, and anomalies can be good or bad. It takes a subject matter expert to determine how to classify the anomalies; the classification could be scrap or okay to use, for example. The subject matter expert is assisting the machine to learn the rules. We then update the model with the classified anomalies and start running again, and we can see that there are a few ways to generate these models. Now, Seagate factories generate hundreds of thousands of images every day. Many of these require a human to look at them and make a decision. This is dull and mistake-prone work that is ideal for artificial intelligence. The initiative that I am project managing is intended to offer a solution that matches the continually increasing complexity of the products we manufacture and that minimizes the need for manual inspection. The Edge RX smart manufacturing reference architecture is the initiative both Hamid and I are working on, and sorry to say that Hamid isn't here today. But as I said, and as you may have guessed, our goal is to introduce early defect detection in every stage of our manufacturing process through machine learning and real-time analytics through inference. And in doing so, we will improve overall product quality, enjoy higher yields with fewer defects, and produce higher margins. Because this was entirely new, we established partnerships with HPE, with NVIDIA, and with Docker and Mirantis two years ago to develop the capability that we now have as we deploy Edge RX to our operations sites on four continents. From a hardware standpoint, HPE and NVIDIA have been able partners in helping us develop an architecture that we have standardized on, and on the software stack side, Docker has been instrumental in helping us manage a very complex project with a steep learning curve for all concerned. To further clarify our efforts to enable more AI and ML in factories: the objective was to determine an economical edge compute that would access the latest AI and ML technology using a standardized platform across all factories. This objective included providing an upgrade path that scales while minimizing disruption to existing factory systems and the burden on factory information systems resources. The two parts to the compute solution are shown in the diagram. The gateway device connects to Seagate's existing factory information systems architecture and does inference calculations. The second part is a training device for creating and updating models. All factories will need the gateway device and the compute cluster on site, and to this day it remains to be seen if the training device is needed in other locations. But we do know that one device is capable of supporting multiple factories simultaneously, and there are also options for training on cloud-based resources. The streaming and storage appliance consists of a Kubernetes cluster with GPU and CPU worker nodes, as well as master nodes and Docker Trusted Registries.
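A hedged sketch of the baseline-and-classify loop described above: model normal behavior across a few variables, flag observations that do not fit, and let a subject matter expert label the flags for the next retraining pass. The library choice (scikit-learn) and the synthetic numbers are illustrative assumptions, not Seagate's production pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline "normal" behavior from three process variables, matching the
# three-variable example in the talk (values here are synthetic).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[1.0, 50.0, 0.2], scale=[0.05, 2.0, 0.01], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations as they arrive; -1 means "does not fit the model".
new_batch = np.vstack([normal[:3], [[1.6, 72.0, 0.9]]])
for row, label in zip(new_batch, model.predict(new_batch)):
    if label == -1:
        print("anomaly for review:", row)  # the SME labels scrap vs. okay-to-use,
                                           # and the labeled result feeds retraining
```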
The GPU nodes are hardware based, using HPE Edgeline EL4000s; the balance are virtual machines. And for machine learning, we've standardized on both the HPE Apollo 6500 and the NVIDIA DGX-1, each with eight NVIDIA V100 GPUs. And, incidentally, the same technology enables augmented and virtual reality. Hardware is only one part of the equation. Our software stack consists of Docker Enterprise and Kubernetes. As I mentioned previously, we've deployed these clusters at all of our operations sites, with specific use cases planned for each site. Mirantis has had a major impact on our ability to develop this capability by offering a stable platform in Universal Control Plane, which provides us with the necessary metrics to determine the health of the Kubernetes cluster, and the use of Docker Trusted Registry to maintain a secure repository for containers. And they have been an exceptional partner in our efforts to deploy clusters at multiple sites. At this point in our deployment efforts, we are on prem, but we are exploring cloud service options that include Mirantis' next-generation Docker Enterprise offering that includes StackLight in conjunction with multi-cluster management. And to me, the concept of federation, of multi-cluster management, is a requirement in our case because of the global nature of our business, where our operations sites are on four continents. So StackLight provides the hooks into each cluster that make multi-cluster management an effective solution. Open source has been a major part of Project Athena, and there was a debate about using Docker CE versus Docker Enterprise. That decision was actually easy, given the advantages that Docker Enterprise would offer, especially during an early phase of development. Kubernetes was a natural addition to the software stack and has been widely accepted. But we have also been at work adopting such open source as RabbitMQ for messaging, TensorFlow and TensorRT, to name three, GitLab for development, and a number of others, as you see here as well. And most of our programming has been in Python. The results of our efforts so far have been excellent. We are seeing a six-month return on investment from just one of seven clusters, where the hardware and software cost approached close to $1 million. The performance on this cluster is now over three million images processed per day, and further adoption has been growing. But the biggest challenge we've seen has been handling a steep learning curve: installing and maintaining complex Kubernetes clusters in data centers that are not used to managing the unique aspects of clusters like this. And because of this, we have been considering adopting a control plane in the cloud, with Kubernetes as a service supported by Mirantis. Even without considering Kubernetes as a service, the concept of federation or multi-cluster management has to be on our road map, especially considering the global nature of our company. Thank you.
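One way to read the gateway/training split described in this talk is that models are fit where the heavy compute lives, and only the serialized artifact travels to the gateway device, which then scores locally. A minimal sketch under that assumption; the file name, feature shapes, and the idea of a "registry" handoff are placeholders, not the actual Edge RX implementation.

```python
import joblib
from sklearn.ensemble import IsolationForest

# --- training cluster (GPU/CPU worker nodes): fit and publish the model ---
def train_and_publish(features, path="wafer_anomaly_v3.joblib"):
    model = IsolationForest(contamination=0.01, random_state=0).fit(features)
    joblib.dump(model, path)            # in practice the artifact lands in a registry
    return path

# --- gateway device on the factory floor: load once, score locally ---
class GatewayScorer:
    def __init__(self, path):
        self.model = joblib.load(path)  # no round trip to the cloud per image

    def score(self, feature_batch):
        return self.model.predict(feature_batch)  # -1 flags an outlier for review
```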
Around theCUBE, Unpacking AI | Juniper NXTWORK 2019
>>from Las Vegas. It's the Q covering. Next work. 2019 America's Do You buy Juniper Networks? Come back already. Jeffrey here with the Cube were in Las Vegas at Caesar's at the Juniper. Next work event. About 1000 people kind of going over a lot of new cool things. 400 gigs. Who knew that was coming out of new information for me? But that's not what we're here today. We're here for the fourth installment of around the Cube unpacking. I were happy to have all the winners of the three previous rounds here at the same place. We don't have to do it over the phone s so we're happy to have him. Let's jump into it. So winner of Round one was Bob Friday. He is the VP and CTO at Missed the Juniper Company. Bob, Great to see you. Good to be back. Absolutely. All the way from Seattle. Sharna Parky. She's a VP applied scientist at Tech CEO could see Sharna and, uh, from Google. We know a lot of a I happen to Google. Rajan's chef. He is the V p ay ay >>product management on Google. Welcome. Thank you, Christy. Here >>All right, so let's jump into it. So just warm everybody up and we'll start with you. Bob, What are some When you're talking to someone at a cocktail party Friday night talking to your mom And they say, What is a I What >>do you >>give him? A Zen examples of where a eyes of packing our lives today? >>Well, I think we all know the examples of the south driving car, you know? Aye, aye. Starting to help our health care industry being diagnosed cancer for me. Personally, I had kind of a weird experience last week at a retail technology event where basically had these new digital mirrors doing facial recognition. Right? And basically, you start to have little mirrors were gonna be a skeevy start guessing. Hey, you have a beard, you have some glasses, and they start calling >>me old. So this is kind >>of very personal. I have a something for >>you, Camille, but eh? I go walking >>down a mall with a bunch of mirrors, calling me old. >>That's a little Illinois. Did it bring you out like a cane or a walker? You know, you start getting some advertising's >>that were like Okay, you guys, this is a little bit over the top. >>Alright, Charlotte, what about you? What's your favorite example? Share with people? >>Yeah, E think one of my favorite examples of a I is, um, kind of accessible in on your phone where the photos you take on an iPhone. The photos you put in Google photos, they're automatically detecting the faces and their labeling them for you. They're like, Here's selfies. Here's your family. Here's your Children. And you know, that's the most successful one of the ones that I think people don't really think about a lot or things like getting loan applications right. We actually have a I deciding whether or not we get loans. And that one is is probably the most interesting one to be right now. >>Roger. So I think the father's example is probably my favorite as well. And what's interesting to me is that really a I is actually not about the Yeah, it's about the user experience that you can create as a result of a I. What's cool about Google photos is that and my entire family uses Google photos and they don't even know actually that the underlying in some of the most powerful a I in the world. But what they know is they confined every picture of our kids on the beach whenever they whenever they want to. Or, you know, we had a great example where we were with our kids. 
Every time they like something in the store, we take a picture of it, Um, and we can look up toy and actually find everything that they've taken picture. >>It's interesting because I think most people don't even know the power that they have. Because if you search for beach in your Google photos or you search for, uh, I was looking for an old bug picture from my high school there it came right up until you kind of explore. You know, it's pretty tricky, Raja, you know, I think a lot of conversation about A They always focus the general purpose general purpose, general purpose machines and robots and computers. But people don't really talk about the applied A that's happening all around. Why do you think that? >>So it's a good question. There's there's a lot more talk about kind of general purpose, but the reality of where this has an impact right now is, though, are those specific use cases. And so, for example, things like personalizing customer interaction or, ah, spotting trends that did that you wouldn't have spotted for turning unstructured data like documents into structure data. That's where a eyes actually having an impact right now. And I think it really boils down to getting to the right use cases where a I right? >>Sharon, I want ask you. You know, there's a lot of conversation. Always has A I replace people or is it an augmentation for people? And we had Gary Kasparov on a couple years ago, and he talked about, you know, it was the combination if he plus the computer made the best chess player, but that quickly went away. Now the computer is actually better than Garry Kasparov. Plus the computer. How should people think about a I as an augmentation tool versus a replacement tool? And is it just gonna be specific to the application? And how do you kind of think about those? >>Yeah, I would say >>that any application where you're making life and death decisions where you're making financial decisions that disadvantage people anything where you know you've got u A. V s and you're deciding whether or not to actually dropped the bomb like you need a human in the loop. If you're trying to change the words that you are using to get a different group of people to apply for jobs, you need a human in the loop because it turns out that for the example of beach, you type sheep into your phone and you might get just a field, a green field and a I doesn't know that, uh, you know, if it's always seen sheep in a field that when the sheep aren't there, that that isn't a sheep like it doesn't have that kind of recognition to it. So anything were we making decisions about parole or financial? Anything like that needs to have human in the loop because those types of decisions are changing fundamentally the way we live. >>Great. So shift gears. The team are Jeff Saunders. Okay, team, your mind may have been the liquid on my bell, so I'll be more active on the bell. Sorry about that. Everyone's even. We're starting a zero again, so I want to shift gears and talk about data sets. Um Bob, you're up on stage. Demo ing some some of your technology, the Miss Technology and really, you know, it's interesting combination of data sets A I and its current form needs a lot of data again. Kind of the classic Chihuahua on blue buried and photos. You got to run a lot of them through. How do you think about data sets? In terms of having the right data in a complete data set to drive an algorithm >>E. 
I think we all know data sets with one The tipping points for a I to become more real right along with cloud computing storage. But data is really one of the key points of making a I really write my example on stage was wine, right? Great wine starts a great grape street. Aye, aye. Starts a great data for us personally. L s t M is an example in our networking space where we have data for the last three months from our customers and rule using the last 30 days really trained these l s t m algorithms to really get that tsunami detection the point where we don't have false positives. >>How much of the training is done. Once you once you've gone through the data a couple times in a just versus when you first started, you're not really sure how it's gonna shake out in the algorithm. >>Yeah. So in our case right now, right, training happens every night. So every night, we're basically retraining those models, basically, to be able to predict if there's gonna be an anomaly or network, you know? And this is really an example. Where you looking all these other cat image thinks this is where these neural networks there really were one of the transformational things that really moved a I into the reality calling. And it's starting to impact all our different energy. Whether it's text imaging in the networking world is an example where even a I and deep learnings ruling starting to impact our networking customers. >>Sure, I want to go to you. What do you do if you don't have a big data set? You don't have a lot of pictures of chihuahuas and blackberries, and I want to apply some machine intelligence to the problem. >>I mean, so you need to have the right data set. You know, Big is a relative term on, and it depends on what you're using it for, right? So you can have a massive amount of data that represents solar flares, and then you're trying to detect some anomaly, right? If you train and I what normal is based upon a massive amount of data and you don't have enough examples of that anomaly you're trying to detect, then it's never going to say there's an anomaly there, so you actually need to over sample. You have to create a population of data that allows you to detect images you can't say, Um oh, >>I'm going to reflect in my data set the percentage of black women >>in Seattle, which is something below 6% and say it's fair. It's not right. You have to be able thio over sample things that you need, and in some ways you can get this through surveys. You can get it through, um, actually going to different sources. But you have to boot, strap it in some way, and then you have to refresh it, because if you leave that data set static like Bob mentioned like you, people are changing the way they do attacks and networks all the time, and so you may have been able to find the one yesterday. But today it's a completely different ball game >>project to you, which comes first, the chicken or the egg. You start with the data, and I say this is a ripe opportunity to apply some. Aye, aye. Or do you have some May I objectives that you want to achieve? And I got to go out and find the >>data. So I actually think what starts where it starts is the business problem you're trying to solve. And then from there, you need to have the right data. What's interesting about this is that you can actually have starting points. 
And so, for example, there's techniques around transfer, learning where you're able to take an an algorithm that's already been trained on a bunch of data and training a little bit further with with your data on DSO, we've seen that such that people that may have, for example, only 100 images of something, but they could use a model that's trained on millions of images and only use those 100 thio create something that's actually quite accurate. >>So that's a great segue. Wait, give me a ring on now. And it's a great Segway into talking about applying on one algorithm that was built around one data set and then applying it to a different data set. Is that appropriate? Is that correct? Is air you risking all kinds of interesting problems by taking that and applying it here, especially in light of when people are gonna go to outweigh the marketplace, is because I've got a date. A scientist. I couldn't go get one in the marketplace and apply to my data. How should people be careful not to make >>a bad decision based on that? So I think it really depends. And it depends on the type of machine learning that you're doing and what type of data you're talking about. So, for example, with images, they're they're they're well known techniques to be able to do this, but with other things, there aren't really and so it really depends. But then the other inter, the other really important thing is that no matter what at the end, you need to test and generate based on your based on your data sets and on based on sample data to see if it's accurate or not, and then that's gonna guide everything. Ultimately, >>Sharon has got to go to you. You brought up something in the preliminary rounds and about open A I and kind of this. We can't have this black box where stuff goes into the algorithm. That stuff comes out and we're not sure what the result was. Sounds really important. Is that Is that even plausible? Is it feasible? This is crazy statistics, Crazy math. You talked about the business objective that someone's trying to achieve. I go to the data scientist. Here's my data. You're telling this is the output. How kind of where's the line between the Lehman and the business person and the hard core data science to bring together the knowledge of Here's what's making the algorithm say this. >>Yeah, there's a lot of names for this, whether it's explainable. Aye, aye. Or interpret a belay. I are opening the black box. Things like that. Um, the algorithms that you use determine whether or not they're inspect herbal. Um, and the deeper your neural network gets, the harder it is to inspect, actually. Right. So, to your point, every time you take an aye aye and you use it in a different scenario than what it was built for. For example, um, there is a police precinct in New York that had a facial recognition software, and, uh, victim said, Oh, it looked like this actor. This person looked like Bill Cosby or something like that, and you were never supposed to take an image of an actor and put it in there to find people that look like them. But that's how people were using it. So the Russians point yes, like it. You can transfer learning to other a eyes, but it's actually the humans that are using it in ways that are unintended that we have to be more careful about, right? Um, even if you're a, I is explainable, and somebody tries to use it in a way that it was never intended to be used. The risk is much higher >>now. 
>>Rajan, to you: which comes first, the chicken or the egg? Do you start with the data and say this is a ripe opportunity to apply some AI, or do you have some AI objectives that you want to achieve, and then you've got to go out and find the data? >>So I actually think where it starts is the business problem you're trying to solve, and then from there you need to have the right data. What's interesting about this is that you can actually have starting points. And so, for example, there are techniques around transfer learning, where you're able to take an algorithm that's already been trained on a bunch of data and train it a little bit further with your data. We've seen that work such that people who may have, for example, only 100 images of something can use a model that's trained on millions of images, and with just those 100 create something that's actually quite accurate.
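A minimal sketch of the fine-tuning pattern Rajan describes might look like the following. The choice of MobileNetV2 pretrained on ImageNet, the image size, and the random placeholder "100 images" are assumptions made for illustration, not a statement of what anyone on the panel actually ships.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 2
IMG_SHAPE = (160, 160, 3)

# Feature extractor pretrained on millions of ImageNet images, head removed.
base = keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained knowledge frozen

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # only this head is trained
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in for "only 100 images of something"; swap in your real labeled data.
X_small = np.random.randint(0, 256, size=(100, *IMG_SHAPE)).astype("float32")
y_small = np.random.randint(0, NUM_CLASSES, size=100)

model.fit(keras.applications.mobilenet_v2.preprocess_input(X_small),
          y_small, epochs=5, batch_size=16, verbose=0)
```

Only the small head on top is trained, which is why a hundred labeled examples can be enough to get something usable.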
>>So that's a great segue into talking about taking one algorithm that was built around one data set and then applying it to a different data set. Is that appropriate? Is that correct? Are you risking all kinds of interesting problems by taking that and applying it over here? Especially in light of people going out to the marketplace: because I don't have a data scientist, couldn't I go get one in the marketplace and apply it to my data? How should people be careful not to make a bad decision based on that? >>So I think it really depends, and it depends on the type of machine learning that you're doing and what type of data you're talking about. For example, with images there are well-known techniques to be able to do this, but with other things there aren't really, and so it really depends. But the other really important thing is that no matter what, at the end you need to test and validate against your data sets and against sample data to see whether it's accurate or not, and that's going to guide everything, ultimately. >>Sharna, I've got to go to you. You brought up something in the preliminary rounds about opening up AI and this idea that we can't have a black box where stuff goes into the algorithm, stuff comes out, and we're not sure how the result was reached. That sounds really important. Is it even plausible? Is it feasible? This is crazy statistics, crazy math. You talked about the business objective that someone's trying to achieve: I go to the data scientist, here's my data, and you're telling me this is the output. Where's the line between the layman, the business person, and the hard-core data scientist in bringing together the knowledge of "here's what's making the algorithm say this"? >>Yeah, there are a lot of names for this, whether it's explainable AI, or interpretable AI, or opening the black box, things like that. The algorithms that you use determine whether or not they're inspectable, and the deeper your neural network gets, the harder it is to inspect, actually. So, to your point, there's risk every time you take an AI and use it in a different scenario than what it was built for. For example, there is a police precinct in New York that had facial recognition software, and a victim said, "Oh, he looked like this actor, he looked like Bill Cosby," or something like that. You were never supposed to take an image of an actor and put it in there to find people who look like them, but that's how people were using it. So to Rajan's point, yes, you can transfer learning to other AIs, but it's actually the humans who are using it in ways that are unintended that we have to be more careful about. Even if your AI is explainable, if somebody tries to use it in a way that it was never intended to be used, the risk is much higher. >>Now, I think maybe I'd add, you know, if you look at Marvis, what we're building for the networking community, there are good examples of both. When Marvis tries to estimate your throughput, your internet throughput, that's what we usually call a decision tree algorithm, and that's a very interpretable algorithm. When we predict low throughput, we know how we got to that answer, right? We know what features got us there. But when we're doing something like anomaly detection, that's a neural network. That black box tells us yes, there's a problem, there's some anomaly, but it doesn't know what caused the anomaly. So that's a case where we actually use a neural network to find the anomaly, and then we're using something else to find the root cause. It really depends on the use case and whether you're going to use an interpretable model or a neural network, which is more of a black-box model, to tell you whether you've got a cat or you've got a problem somewhere. >>So, Bob, that's really interesting. So you can't unpack it? Is it just the nature of the way the communication and the data flow and the inferences are made that you can't go in and unpack a neural network, that you have to have a separate kind of process to get to the root cause? >>Yeah, as a scientist it's always hard to say never, but inherently neural networks are very complicated. It's a set of weights, right? It's usually a supervised training model: we're feeding in a bunch of data and trying to train it to detect certain features, or an output. But that is where they're powerful, and that's why they're doing so well, because they are mimicking the brain. A neural network is a very complex thing, kind of like your brain. We really don't understand how your brain works, and right now, when you have a problem, it's really trial and error as we try to figure it out. >>Right. So I want to stay with you, Bob, for a minute. What about when you change what you're optimizing for? You just said you're optimizing for throughput of the network, you're looking for problems. Now let's just say it's the end of the quarter, or some other reason, and you're changing what you're optimizing for. Can you? Do you have to write a separate algorithm? Can you have dynamic movement inside that algorithm? How do you approach that problem? Because you're not always optimizing for the same things, depending on the market conditions. >>Yeah, I mean, I think a good example, again with Marvis, is really what we call reinforcement learning. Reinforcement learning is a model we use for things like radio resource management, where we're really trying to optimize for the user experience, trying to balance the reward: the model is rewarded on whether or not we have a good balance between the network and the user. That reward can be changed. So that algorithm is basically reinforcement; you can fundamentally change how that algorithm works by changing the reward you give it.
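To illustrate Bob's point that the reward is the steering wheel, here is a toy, purely hypothetical epsilon-greedy bandit choosing a radio setting; swapping the reward function changes which setting the same algorithm converges on. The channel names, simulated metrics, and weights are invented for the example and have nothing to do with Marvis's actual implementation.

```python
import random

ACTIONS = ["channel_1", "channel_6", "channel_11"]

def simulate_metrics(action):
    """Pretend network telemetry observed after applying an action."""
    base = {"channel_1": 0.6, "channel_6": 0.8, "channel_11": 0.7}[action]
    throughput = base + random.uniform(-0.1, 0.1)               # normalized
    user_experience = 1.0 - abs(0.7 - base) + random.uniform(-0.1, 0.1)
    return throughput, user_experience

def reward_throughput(throughput, user_experience):
    return throughput

def reward_balanced(throughput, user_experience):
    # Change the reward and you change what the same algorithm optimizes for.
    return 0.4 * throughput + 0.6 * user_experience

def run_bandit(reward_fn, steps=5000, epsilon=0.1):
    values = {a: 0.0 for a in ACTIONS}   # running estimate of each action's reward
    counts = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)          # explore
        else:
            action = max(values, key=values.get)     # exploit
        r = reward_fn(*simulate_metrics(action))
        counts[action] += 1
        values[action] += (r - values[action]) / counts[action]  # running mean
    return max(values, key=values.get)

print("optimizing raw throughput prefers:", run_bandit(reward_throughput))
print("optimizing a balanced reward prefers:", run_bandit(reward_balanced))
```

Real radio resource management is far more involved, but the dependence on how the reward is defined is the same.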
>>Great. Rajan, back to you. A couple of huge things have come into play in the marketplace, and I want to get your take. One is open source: what's the impact of open source generally on the availability of, and the desire for, more applications? And then cloud, and soon to be edge, the next stop. How do you guys incorporate that opportunity? How does it change what you can do? How does it open up the lens of AI? >>Yeah, I think open source is really important, because one thing that's interesting about AI is that it's a very nascent field, and the more there's open source, the more people can build on top of each other and utilize what others have done. It's similar to how we've seen open source impact operating systems, the internet, things like that. With cloud, I think one of the big things is that now you have the processing power and the ability to access lots of data to be able to create these networks, and so the capacity for data and the capacity for compute is much higher. Edge is going to be a very important thing, especially going into the next few years. You're seeing more things incorporated on the edge, and one exciting development is around federated learning, where you can train on the edge and then combine some of those aspects into a cloud-side model. And so that, I think, will actually make edge even more powerful. >>But it's got to be so dynamic, right? Because the fundamental problem always used to be: do you move the compute to the data or the data to the compute? Well, now on these edge devices you've got tons of data, right, sensor data, all kinds of machine data. You've got potentially nasty, hostile conditions; you're not in a nice, pristine data center where the environmental conditions are controlled, and you've got connectivity issues. So when you think about that problem, there's still great information there, but you've got latency issues, and some of it might have to be processed close to home. How do you incorporate that age-old thing, the speed of light, and still break up the problem in a way that gives you a step up? >>Well, what we see a lot of customers do is a lot of training in the cloud, but then inference on the edge. That way they're able to create the model that they want, but they get fast response time by moving the model to the edge. The other thing is that, like you said, lots of data is coming in at the edge. So one way to handle it is to efficiently move that data to the cloud, but the other way is to filter it, and to try to figure out what data you want to send to the cloud so that you can create the next model.
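A hedged sketch of that train-in-the-cloud, infer-at-the-edge pattern, including the filtering Rajan mentions, might look like this. The tiny Keras model, the TensorFlow Lite conversion, and the "only upload ambiguous samples" rule are illustrative assumptions, not any vendor's actual pipeline; federated learning, which he also mentions, would additionally merge model updates rather than raw data.

```python
import numpy as np
import tensorflow as tf

# --- Cloud side: train on plentiful data and export a compact artifact. ---
X = np.random.rand(1000, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# --- Edge side: fast local inference, plus a filter on what goes upstream. ---
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def edge_predict(sample):
    interpreter.set_tensor(inp["index"], sample.reshape(1, 8).astype("float32"))
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0, 0])

upload_queue = []
for sample in np.random.rand(50, 8).astype("float32"):
    score = edge_predict(sample)
    # Filter: only send ambiguous readings back to the cloud to improve the
    # next model, instead of streaming everything over the network.
    if 0.4 < score < 0.6:
        upload_queue.append(sample)

print(f"kept {len(upload_queue)} of 50 samples for cloud retraining")
```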
>>Sharna, back to you. Let's shift gears into ethics, this pesky, pesky issue that's not a technological issue at all, but we see it often, especially in tech: just because you can doesn't mean that you should. And this is not a STEM issue, right? There are a lot of different things going on here. So how should people be thinking about ethics? How should they incorporate ethics? How should they make sure they've got some kind of standard overseeing what they're doing and the decisions that are being made? >>Yeah. One of the more approachable ways that I have found to explain this is with behavioral science methodologies. Ethics is a massive field of study, and not everyone shares the same ethics. However, you can bring it closer to behavior change, because every product that we're building is seeking to change a behavior. We need to ask questions like: what is the gap between the person's intention and the goal we have for them? Would they choose that goal for themselves or not? If they wouldn't, then you have an ethical problem, right? And this can be true of the intention-goal gap or the intention-action gap. We can see it in how we regulated cigarettes: we can't just make them look cool without telling people what the cigarettes are doing to them, right? So we can apply the same principles moving forward, and they're pretty accessible without having to know that this philosopher and that philosopher and this ethicist said these things; it can be pretty human. The challenge with this is that most people building these algorithms are not trained in this way of thinking, and especially when you're working at a startup, you don't have access to massive teams of people to guide you down this journey. So you need to build it in from the beginning, and you need to be open and based upon principles. And it's going to touch every component: it should touch your data, your algorithm, the people that you're using to build the product. If you only have white men building the product, you have a problem; you need to pull in other people. Otherwise there are just blind spots that you are not going to think of in order to build that product for a wider audience. >>But it seems like we're on such a razor-sharp edge, right? Coca-Cola wants you to buy Coca-Cola, so they show ads for Coca-Cola and they appeal to your "let's all sing together on the hillside and be one," right? But it feels like with AI you can now cheat. Now you can use behavioral biases that are hardwired into my brain as a biological creature against me. So where is the fine line between just trying to get you to buy Coke, which you could argue is probably just as bad as Juul because you get diabetes and all these other issues, but that's acceptable, and cigarettes, which are not? And now we're seeing this stuff come out on Facebook. >>So we know this is happening, and Coke isn't just selling Coke anymore; they're also selling vitamin water. So their play isn't to have a single product that you can purchase, it's to have a suite of products: if you don't want that Coke, you can buy something else, and if you want that vitamin water, you can have that. >>Shouldn't I get vitamin water and a smile? That only comes with the Coke. Bob, you want to jump in? >>I think we're going to see ethics really break into two different discussions, right? I mean, there's the ethics of human behavior that already exists: bad behavior, like discriminatory hiring, or training on that behavior. What's wrong in the human world today is going to be wrong in the AI world. I think the other component to this ethics discussion is really around privacy and data. It's like that mirror example, right? Who gave that mirror the right to basically tell me I'm old, and to actually do something with that data? Is that my data, or is it the mirror's data, because it recognized me and did something with it? That's the Facebook example: when I get the email telling me to look at that picture, someone's tagged me in a picture, it's like, where was that? Where did that come from? Right? >>What I'm curious about, to follow up on that, is that social norms change. We talked about it a little bit before we turned the cameras on, right? It used to be okay to have no Black people drinking out of a fountain, or to have them come in the side door of a restaurant, not that long ago, right, in the '60s. So if someone had built an algorithm then, it would probably have incorporated that social norm. But social norms change. So how should we try to stay ahead of that, or at least go back reflectively after the fact and say, back to the black box, that's no longer acceptable, we need to tweak this? >>I would have said in that example that it was wrong 50 years ago. >>Okay, it was wrong. But if you had asked somebody in Alabama, at the University of Alabama math department, people who had been born and bred in that culture, they probably would not necessarily have agreed. So generally, though, assuming things change, how should we make sure to go back and check that we're not carrying forward things that are no longer the right thing to do? >>Well, as I said, what we know is wrong in the human world is going to be wrong in the AI world. I think the more subtle thing is when we start relying on these AIs to make decisions like: does my car hit the pedestrian or save my life? Those are tough decisions to let a machine make for you. Or is it okay for Marvis to give these VIPs preference over other people, right? Those types of decisions are the ethical decisions, and whether they're right or wrong in the human world, I think the same thing will apply in the AI world. I do think we'll start to see more regulation. Just like we see regulation happen in our hiring, that regulation is going to be applied to our AI solutions. >>Right, and we're going to come back to regulation in a minute. But Rajan, I want to follow up with you on your earlier session. You made an interesting comment. You said, you know, 10% is clearly good, 10% is clearly bad, but there's a soft, squishy middle of 80% that isn't necessarily super clear, good or bad. So how should people make judgments in this big gray area in the middle? >>Yeah, and I think that is the toughest part. So the approach that we've taken is to set out a set of AI principles. What we did is actually write down seven things that we think AI should do, and four things that we will not do. And we now have to look at everything that we're doing against those AI principles. Part of that is coming up with a governance process, because ultimately it boils down to doing this over and over, seeing lots of cases, and figuring out what you should do. That governance process is something we're doing, but I think it's something that every company is going to need to do. >>Sharna, I want to come back to you; we'll shift gears to talk a little bit about the law. We've all seen Zuckerberg, who, unfortunately for him, has been stuck in these congressional hearings over and over and over again, a little bit of a deer in the headlights. You made an interesting comment on your prior show that it's almost like he's asking for regulation. He stumbled into some really big, hairy, nasty areas that were never necessarily intended when they launched Facebook out of his dorm room many, many moons ago. So what is the role of the law? Because the other thing that we've seen, unfortunately, in a lot of those hearings is that a lot of our elected officials are way, way, way behind; they're still printing out their emails, right? So what is the role of the law? How should we think about it? What should we invite from the law to help sort some of this stuff out?
>>I think, as an individual, I would like for each company not to make up their own set of principles. I would like to have a shared set of principles that we're all following. The challenge is that between governments, that's impossible. China is never going to come up with the same regulations that we will; they have different privacy standards than we do. But we are seeing, locally, that the state of Washington has created a future-of-work task force, and they're coming into the private sector and asking companies like Textio and like Google and Microsoft to actually advise them on what we should be regulating, saying, "We don't know, we're not the technologists." But they know how to regulate, and they know how to move policies through the government. What we'll find is that if we don't advise regulators on what we should be regulating, they're going to regulate it in some way, just like they regulated the tobacco industry, just like they regulated monopolies. Tech is big enough now, and there is enough money in it now, that it will be regulated. So we need to start advising them on what we should regulate, because it's just like Mark said: well, everyone else was doing it, my competitors were doing it, so if you don't want me to do it, make us all stop. >>Can I ring a negative bell? Not for you, but for Mark's response; that's crazy. So Bob, old man at the mall, it's actually a little bit more codified now, right? There's GDPR, which came through in May of last year, and now the newest is the California Consumer Privacy Act, which goes into effect January 1. And what's interesting is that the hardest part of implementing that, I think, for anyone who has implemented it, is the right to be forgotten, because as we all know, computers are really good at recording information, and in the cloud it's recorded everywhere; there's no "there" there. So with these types of regulations, how does that impact AI? Because if I've got an algorithm built on a data set, and person, item number 472, decides they want to be forgotten, how does the AI deal with that? >>Well, I mean, with Facebook, I suspect Mark knows what's right and wrong. He's just kicking the ball down the road, like, "I want you guys to tell me; it's your problem, please tell me what to do." I see AI as kind of like any other new technology: it can be abused and used in the wrong ways. I think legally we have a constitution that protects our rights, and I think we're going to see the lawyers treat AI just like any other constitutional matter. People who are building products using AI, just like people who build medical products or other products that can actually harm people, are going to have to make sure that their AI product does not harm people, and that their product does not promote discriminatory results. So I think we're going to see our constitutional framework applied to AI just like we've seen with other technologies. >>And it's going to create jobs because of that, right? Because it will be a whole new set of lawyers. >>A whole new set of lawyers, and testers even, because otherwise an individual company is saying, "But we tested it, it works, trust us." How are you going to get independent third-party verification of that? So we're going to start to see a whole proliferation of those types of fields that never had to exist before. >>Yeah, one of my favorites is Dr. Rumman Chowdhury from Accenture.
If you don't follow her on Twitter, follow her; she's fantastic and a great lady. So I want to stick with you for a minute, Bob, because the next topic is autonomy. Rami, up in the keynote this morning, talked about Mist, and really this shifting of the workload of fixing things into an autonomous setup, where the system now is finding problems, diagnosing problems, fixing problems, up to, I think he said, even generating return authorizations for broken gear, which is amazing. But autonomy opens up all kinds of crazy, scary things. Robert Gates, whom we interviewed, said, you know, the only guns in the entire U.S. military that are autonomous are the ones on the border of North Korea; every single other one has to run through a person. When you think about autonomy, and when you can actually grant this AI the autonomy, the agency, to act, what are some of the things to think about to keep it from just doing something bad really, really fast and efficiently? >>Yeah. I mean, I think it's what we discussed, right? For all practical purposes we're far away. There is a tipping point; I think eventually we will get to the C-3PO or Terminator day where we actually build something that's on par with a human. But for right now, we're really looking at tools that are going to help businesses, doctors, self-driving cars, and those tools are going to be used by our customers to allow them to do more productive things with their time. Whether it's a doctor using an AI tool to help make better predictions, there's still going to be a human involved. And what Rami talked about this morning in networking is really about allowing our IT customers to focus more on their business problems, where they don't have to spend their time finding bad hardware and bad software, and about making better experiences for the people they're actually trying to serve. >>Right. Sharna, I want to get your take on autonomy, because it's a different level of trust that we're giving to the machine when we actually let it do things on its own. >>There's a lot that goes into this decision of whether or not to allow autonomy. There's an example I read in a book that just came out; what's the title? "You Look Like a Thing and I Love You." It was a book named by an AI, and if you want to learn a lot about AI and you don't know much about it, get it; it's really funny. So in there, there is a factory in China where the AI is optimizing the output of cockroaches. They just want more cockroaches. Why do they want that? They want to grind them up and put them in a lotion; it's one of their secret ingredients. Now, it depends on what parameters you allow that AI to change, right? If you decide to let the AI flood the container, then the cockroaches get out through the vents, and then they get to the kitchen to get food, and then they reproduce. The parameters over which you let it be autonomous, that's the challenge. So when we're working with very narrow AI, where you tell the AI, "you can change these three things, and you can't just change anything else," then it's a lot easier to make that autonomous decision. And then the last part of it is that you want to know what the result of a negative outcome is, versus the result of a positive outcome, and whether those results are something that we can actually accept. >>Right, right.
Rajan, I'll give you the last word on this topic, because the next order, the next step, is machines actually writing their own algorithms, right? They start to write their own code, so they take on this next order of thought and agency, if you will. How do you guys think about that? You guys are way out ahead in this space: you have huge data sets, you've got great technology, you've got TensorFlow. When will the machines start writing their own algorithms? >>Well, it's actually already starting. For example, we have a product called Google Cloud AutoML that basically takes in a data set, and then we find the best model to match that data set. So things like that are there already, but it's still very nascent; there's a lot more that can happen. And ultimately, with how it's used, part of it is that you always have to look at the downside of automation: what is the downside of a bad decision, whether it's the wrong algorithm that you create or a bad decision from that model? If the downside is really big, that's where you need to start to apply a human in the loop. So, for example, in medicine, AI can do amazing things to detect diseases, but you would want a doctor in the loop to be able to actually diagnose. And so you need to have that in place in many situations to make sure that it's being applied well. >>But is that just today, or is that tomorrow? Because with exponential growth, and as fast as these things are growing, will there be a day when you don't necessarily need the doctor, except maybe to communicate the news? Maybe there are second-order impacts in terms of how you deal with the family, and the pros and cons of treatment options that are more emotional than mechanical, because it seems like eventually the doctor has a role, but it isn't necessarily in accurately diagnosing a problem. >>I think for some things, absolutely, over time the algorithms will get better and better, and you can rely on them and trust them more and more. But again, I think you have to look at the downside consequence: if there's a bad decision, what happens, and how does that compare to what happens today? So, for example, self-driving cars: we will get to the point where cars are driving by themselves. There will be accidents, but the accident rate is going to be much lower than what's there with humans today. So it will get there, but it will take time. >>And there will be a day when it will be illegal for you to drive, because you could commit manslaughter, right? >>I believe absolutely there will be, and I don't think it's that far off, actually. >>I can't wait for the day when my car takes me up to Northern California while I'm sleeping. If I only live that long. >>That's right, and it can work while you're sleeping, right? Well, I want to thank everybody a ton for being on this panel. This has been super fun, and these are really big issues. So I want to give you the final word; we'll just give everyone a final say. And I just want to throw out Amara's law. People talk about Moore's law all the time, but Amara's law, which Gartner kind of turned into the hype cycle, says that we tend to overestimate in the short term, which is why you get the hype cycle, and we tend to underestimate, in the long term, the impacts of technology.
So as you look forward into the future, and I won't put a year number on it, how do you see this rolling out? What are you excited about? What are you scared about? What should we be thinking about? We'll start with you, Bob. >>Yeah, you know, for me, the day of the Terminator, I don't know if it's 100 years or 1,000 years away, but that day is coming. We will eventually build something that's on par with a human. And on the mention of the book "You Look Like a Thing and I Love You": that kind of thing was written by someone who tried to train an AI to generate pickup lines, right, cheesy pickup lines. I'm not sure I'm going to trust AI to help me with my pickup lines yet. You know, "I love you, you look like a thing, I love you": I don't know if they work. >>Yeah, but who would have guessed online dating would be what it is if you had asked 15 years ago? >>But I think, yes, overall, we will see the Terminator or C-3PO; it's probably not in our lifetime, but it is in the future somewhere. AI is definitely going to be on par with the internet, the cell phone, the radio. It's going to be a technology that keeps accelerating. If you look at where technology has been, it's amazing to watch how fast things have changed in our lifetime alone, right? We're just on this curve of technology acceleration. >>We're on the exponential curve. Sharna? >>Yeah, I think the thing I'm most excited about for AI right now is the addition of creativity to a lot of our jobs. We build an augmented writing product, and what we do is look at the words that have been used in the world and their outcomes, and we tell you what words have impacted people in the past. With that information, when you augment humans in that way, they get to be more creative. They get to use language that has never been used before to communicate an idea. You can do this with any field; you can do it with the composition of music. If you can have access as an individual to the data of a bunch of cultures, the way that we evolve can change. So I'm most excited about that. I think I'm most concerned currently about the products that we're building to give AI to people who don't understand how to use it or how to make sure they're making an ethical decision. It is extremely easy right now to go on the internet and build a model on a data set, and if I'm not a specialist in data, I have no idea whether I'm adding bias in or not. So it's an interesting time, because we're in that middle area. >>And it's getting loud in here. All right, Rajan, we'll throw it to you before we have to cut out, or we're not going to be able to hear anything. >>So I actually start every presentation with a picture of the Mosaic browser, because what's interesting is that I think that's where AI is today compared to where the internet was around 1994. We're just starting to see how AI can actually impact the average person. As a result, there's a lot of hype, but what I'm actually finding is that for 70% of the companies I talk to, the first question is: why should I be using this, and what benefit does it give me? >>70% ask you why? >>Yeah, and what's interesting about that is that I think people are still trying to figure out what this stuff is good for. But to your point about the long
But to your point about the long >>run, and we underestimate the longer I think that every company out there and every product will be fundamentally transformed by eye over the course of the next decade, and it's actually gonna have a bigger impact on the Internet itself. And so that's really what we have to look forward to. >>All right again. Thank you everybody for participating. There was a ton of fun. Hope you had fun. And I look at the score sheet here. We've got Bob coming in and the bronze at 15 points. Rajan, it's 17 in our gold medal winner for the silver Bell. Is Sharna at 20 points. Again. Thank you. Uh, thank you so much and look forward to our next conversation. Thank Jeffrey Ake signing out from Caesar's Juniper. Next word unpacking. I Thanks for watching.
Around theCUBE, Unpacking AI Panel, Part 2 | CUBEConversation, October 2019
(upbeat music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Welcome everyone to this special CUBE Conversation Around the CUBE segment, Unpacking AI, number two, sponsored by Juniper Networks. We've got a great lineup here to go around the CUBE and unpack AI. We have Ken Jennings, all-time Jeopardy champion with us. Celebrity, great story there, we'll dig into that. John Hinson, director of AI at Evotek and Charna Parkey, who's the applied scientist at Textio. Thanks for joining us here for Around the CUBE Unpacking AI, appreciate it. First question I want to get to, Ken, you're notable for being beaten by a machine on Jeopardy. Everyone knows that story, but it really brings out the question of AI and the role AI is playing in society around obsolescence. We've been hearing gloom and doom around AI replacing people's jobs, and it's not really that way. What's your take on AI and replacing people's jobs? >> You know, I'm not an economist, so I can't speak to how easy it's going to be to retrain and re-skill tens of millions of people once these clerical and food prep and driving and whatever jobs go away, but I can definitely speak to the personal feeling of being in that situation, kind of watching the machine take your job on the assembly line and realizing that the thing you thought made you special no longer exists. If IBM throws enough money at it, your skill essentially is now obsolete. And it was kind of a disconcerting feeling. I think that what people need is to feel like they matter, and that went away for me very quickly when I realized that a black rectangle can now beat me at a game show. >> Okay John, what's your take on AI replacing jobs? What's your view on this? >> I think, look, we're all going to have to adapt. There's a lot of changes coming. There's changes coming socially, economically, politically. I think it's a disservice to us all to get to too indulgent around the idea that these things are going to change. We have to absorb these things, we have to be really smart about how we approach them. We have to be very open-minded about how these things are going to actually change us all. But ultimately, I think it's going to be positive at the end of the day. It's definitely going to be a little rough for a couple of years as we make all these adjustments, but I think what AI brings to the table is heads above kind of where we are today. >> Charna, your take around this, because the role of humans versus machines are pretty significant, they help each other. But is AI going to dominate over humans? >> Yeah, absolutely. I think there's a thing that we see over and over again in every bubble and collapse where, you know, in the automotive industry we certainly saw a bunch of jobs were lost, but a bunch of jobs were gained. And so we're just now actually getting into the phase where people are realizing that AI isn't just replacement, it has to be augmentation, right? We can't simply use images to replace recognition of people, we can't just use black box to give our FICO credit scores, it has to be inspectable. So there's a new field coming up now called explainable AI that actually is where we're moving towards and it's actually going to help society and create jobs. >> All right so let's stay on that next point for the next round, explainable AI. This points to a golden age. There's a debate around are we in a bubble or a golden age. A lot of people are negative right now on tech. 
You can see all the tech backlash. Amazon, the big tech companies like Apple and Facebook, there's a huge backlash around this so-called tech for society. Is this an indicator of a golden age coming? >> I think so, absolutely. We can take two examples of this. One would be where, you remember when Amazon built a hiring algorithm based upon their own resume data and they found that it was discriminating against women because they had only had men apply for it. Now with Textio we're building augmented writing across the audience and not from a single company and so companies like Johnson and Johnson are increasing the pipeline by more than nine percent which converts to 90,000 more women applying for their jobs. And so part of the difference there is one is explainable, one isn't, and one is using the right data set representing the audience that is consuming it and not a single company's hiring. So I think we're absolutely headed into more of a golden age, and I think these are some of the signs that people are starting to use it in the right way. >> John, what's your take? Obviously golden age doesn't look that to us right now. You see Facebook approving lies as ads, Twitter banning political ads. AI was supposed to solve all these problems. Is there light at the end of this dark tunnel we're on? >> Yeah, golden age for sure. I'm definitely a big believer in that. I think there's a new era amongst us on how we handle data in general. I think the most important thing we have here though is education around what this stuff is, how it works, how it's affecting our lives individually and at the corporate level. This is a new era of informing and augmenting literally everything we do. I see nothing but positives coming out of this. We have to be obviously very careful with our approaching all the biases that already exist today that are only going to be magnified with these types of algorithms at mass scale. But ultimately if we can get over that hurdle, which I believe collectively we all need to do together, I think we'd live in much better, less wasteful world just by approaching the data that's already at hand. >> Ken, what's your take on this? It's like a daily double question. Is it going to be a golden age? >> Laughs >> It's going to come sooner or later. We have to have catastrophe before, we have to have reality hit us in the face before we realize that tech is good, and shaping it? It's pretty ugly right now in some of the situations out there, especially in the political scene with the election in the US. You're seeing some negative things happening. What's your take on this? >> I'm much more skeptical than John and Charna. I feel like that kind of just blinkered, it's going to be great, is something you have to actually be in the tech industry and hearing all day to actually believe. I remember seeing kind of lay-person's exposure to Watson when Watson was on Jeopardy and hearing the questions reporters would ask and seeing the memes that would appear, and everyone's immediate reaction just to something as innocuous as a AI algorithm playing on a game show was to ask, is this Skynet from Terminator 2? Is this the computer from The Matrix? Is this HAL pushing us out of the airlock? Everybody immediately first goes to the tech is going to kill us. That's like everybody's first reaction, and it's weird. I don't know, you might say it's just because Hollywood has trained us to expect that plot development, but I almost think it's the other way around. 
Like that's a story we tell because we're deeply worried about our own meaning and obsolescence when we see how little these skills might be valued in 10, 20, 30 years. >> I can't tell you how much, by the way, Star Trek, Star Wars and Terminators probably affected the nomenclature of the technology. Everyone references Skynet. Oh my God, we're going to be taken over and killed by aliens and machines. This is a real fear. I thinks it's an initial reaction. You felt that Ken, so I've got to ask you, where do you think the crossover point is for people to internalize the benefits of say, AI for instance? Because people will say hey, look back at life before the iPhone, look at life before these tools were out there. Some will say society's gotten better, but yet there's this surveillance culture, things... And on and on. So what do you guys think the crossover point is for the reaction to change from oh my God, it's Skynet, gloom and doom to this actually could be good? >> It's incredibly tricky because as we've seen, the perception of AI both in and out of the industry changes as AI advances. As soon as machine learning can actually do a task, there's a tendency to say there's this no true Scotsman problem where we say well, that clearly can't be AI because I see how the trick worked. And yeah, humans lose at chess now. So when these small advances happen, the reaction is often oh, that's not really AI. And by the same token, it's not a game-changer when your email client can start to auto-complete your emails. That's a minor convenience to you. But you don't think oh, maybe Skynet is good. I really do think it's going to have to be, maybe the inflection point is when it starts to become so disruptive that actually public policy has to change. So we get serious about >> And public policy has started changing. >> whatever their reactions are. >> Charna, your thoughts. >> The public policy has started changing though. We just saw, I think it was in September, where California banned the use of AI in the body cameras, both real-time and after the fact. So I think that's part of the pivot point that we're actually seeing is that public policy is changing.` The state of Washington currently has a task force for AI who's making a set of recommendations for policy starting in December. But I think part of what we're missing is that we don't have enough digital natives in office to even attempt to, to your point Ken, predict what we're even going to be able to do with it, right? There is this fear because of misunderstanding, but we also don't have a respect of our political climate right now by a lot of our digital natives, and they need to be there to be making this policy. >> John, weigh in on this because you're director of AI, you're seeing positive, you have to deal with the uncertainty as well, the growth of machine learning. And just this week Google announced more TensorFlow for everybody. You're seeing Open Source. So there's a tech push, almost a democratization, going on with AI. So I think this crossover point might be sooner in front of us than people think. What's your thoughts? >> Yeah it's here right now. All these things can be essentially put into an environment. You can see these into products, or making business decisions or political decisions. These are all available right now. They're available today and its within 10 to 15 lines of code. It's all about the data sets, so you have to be really good stewards of the data that you're using to train your models. 
But I think the most important thing, back to the Skynet and all this science-fiction side, we have to collectively start telling the right stories. We need better stories than just this robots are going to take us over and destroy all of our jobs. I think more interesting stories really revolve around, what about public defenders who can have this informant augmentation algorithm that's going to help them get their job done? What about tailor-made medicine that's going to tell me exactly what the conditions are based off of a particular treatment plan instead of guessing? What about tailored education that's going to look at all of my strengths and weaknesses and present a plan for me? These are things that AI can do. Charna's exactly right, where if we don't get this into the right political atmosphere that's helping balance the capitalist side with the social side, we're going to be in trouble. So that's got to be embedded in every layer of enterprise as well as society in general. It's here, it's now, and it's real. >> Ken, before we move on to the ethics question, I want to get your thoughts on this because we have an Alexa at home. We had an Alexa at home; my wife made me get rid of it. We had an Apple device, what they're called... the Home pods, that's gone. I bought a Portal from Facebook because I always buy the earliest stuff, that's gone. We don't want listening devices in our house because in order to get that AI, you have to give up listening, and this has been an issue. What do you have to give to get? This has been a big question. What's your thoughts on all this? >> I was at an Amazon event where they were trumpeting how no technology had ever caught on faster than these personal digital assistants, and yet every time I'm in a use case, a household that's trying to use them, something goes terribly wrong. My friend had to rename his because the neighbor kids kept telling Alexa to do awful things. He renamed it computer, and now every time we use the word computer, the wall tells us something we don't want to know. >> (laughs) >> This is just anecdata, but maybe it speaks to something deeper, the fact that we don't necessarily like the feeling of being surveilled. IBM was always trying to push Watson as the star Trek computer that helpfully tells you exactly what you need to know in the right moment, but that's got downsides too. I feel like we're going to, if nothing else, we may start to value individual learning and knowledge less when we feel like a voice from the ceiling can deliver unto us the fact that we need. I think decision-making might suffer in that kind of a world. >> All right, this brings up ethics because I bring up the Amazon and the voice stuff because this is the new interface people want to have with machines. I didn't mention phones, Androids and Apple, they need to listen in order to make decisions. This brings up the ethics question around who sets the laws, what society should do about this, because we want the benefits of AI. John, you point out some of them. You got to give to get. Where are we on ethics? What's the opinion, what's the current view on this? John, we'll start with you on your ethics view on what needs to change now to move the ball faster. >> Data is gold. Data is gold at an exponential rate when you're talking about AI. There should be no situation where these companies get to collect data at no cost or no benefit to the end consumer. 
So ultimately we should have the option to opt out of any of these products and any of this type of surveillance wherever we can. Public safety is a little bit different situation, but on the commercial side, there is a lot of more expensive and even more difficult ways to train these models with a data set that isn't just basically grabbing everything our of your personal lives. I think that should be an option for consumers and that's one of those ethical check-marks. Again, ethics in general, the way that data's trained, the way that data's handled, the way models actually work, it has to be a primary reason for and approach of how you actually go about developing and delivering AI. That said, we cannot get over-indulgent in the fact that we can't do it because we're so fearful of the ethical outcomes. We have to find some middle ground and we have to find it quickly and collectively. >> Charna, what's your take on this? Ethics is super important to set the agenda for society to take advantage of all this. >> Yeah. I think we've got three ethical components here. We certainly have, as John mentioned, the data sets. However, it's also what behavior we're trying to change. So I believe the industry could benefit from a lot more behavioral science, so that we can understand whether or not the algorithms that we're building are changing behaviors that we actually want to change, right? And if we aren't, that's unethical. There is an entire field of ethics that needs to start getting put into our companies. We need an ethics board internally. A few companies are doing this already actually. I know a lot of the military companies do. I used to be in the defense industry, and so they've got a board of ethics before you can do things. The challenge is also though that as we're democratizing the algorithms themselves, people don't understand that you can't just get a set of data that represents the population. So this is true of image processing, where if we only used 100 images of a black woman, and we used 1,000 images of a white man because that was the distribution in our population, and then the algorithm could not detect the difference between skin tones for people of color, then we end up with situations where we end up in a police state where you put in an image of one black woman and it looks like ten of them and you can't distinguish between them. And yet, the confidence rate for the humans are actually higher, because they now have a machine backing their decision. And so they stop questioning, to your point, Ken, about what is the decision I'm making, they're like I'm so confident, this data told me so. And so there's a little bit of you need some expert in the loop and you also can't just have experts, because then you end up with Cambridge Analytica and all of the political things that happened there, not just in the US, but across 200 different elections and 30 different countries. And we are upset because it happened in the US, but this has been happening for years. So its just this ethical challenge of behavior change. It's not even AI and we do it all the time. Its why the cigarette industry is regulated (laughs). >> So Ken, what's your take on this? Obviously because society needs to have ethics. Who runs that? Companies? The law-makers? Someone's got to be responsible. >> I'm honestly a little pessimistic the general public will even demand this the way we're maybe hoping that they will. 
When I think about an example like Facebook, people just being able to, being willing to give away insane amounts of data through social media companies for the smallest of benefits: keeping in touch with people from high school they don't like. I mean, it really shows how little we value not being a product in this kind of situation. But I would like to see this kind of ethical decisions being made at the company-level. I feel like Google kind of surreptitiously moved away from it's little don't be evil mantra with the subtext that eh, maybe we'll be a little evil now. It just reminds me of Manhattan Project era thinking, where you could've gone to any of these nuclear scientists and said you're working on a real interesting puzzle here, it might advance the field, but like 200,000 civilians might die this summer. And I feel like they would've just looked at you and thought that's not really my bailiwick. I'm just trying to solve the fission problem. I would like to see these 10 companies actually having that kind of thinking internally. Not being so busy thinking if they can do something that they don't wonder if they should. >> That's a great point. This brings up the point of who is responsible. Almost as if who is less evil than the other person? Google, they don't do evil, but they're less evil than Amazon and Facebook and others. Who is responsible? The companies or the law-makers? Because if you look up some of the hearings in Washington, D.C., some of the law-makers we see up there, they don't know how the internet works, and it's pretty obvious that this is a problem. >> Yeah, well that's why Jack Dorsey of Twitter posted yesterday that he banned not just political ads, but also issue ads. This isn't something that they're making him do, but he understands that when you're using AI to target people, that it's not okay. At some point, while Mark is sitting on (laughs) this committee and giving his testimony, he's essentially asking to be regulated because he can't regulate himself. He's like well, everyone's doing it, so I'm going to do it too. That's not an okay excuse. We see this in the labor market though actually, where there's existing laws that prevent discrimination. It's actually the company's responsibility to make sure that the products that they purchase from any vendor isn't introducing discrimination into that process. So its not even the vendor that's held responsible, it's the company and their use of it. We saw in the NYPD actually that one of those image recognition systems came up and someone said well, he looked like, I forget the name of what the actor was, but some actor's name is what the perpetrator looked like and so they used an image of the actor to try and find the person who actually assaulted someone else. And that's, it's also the user problem that I'm super concerned about. >> So John, what's your take on this? Because these are companies are in business to make money, for profit, they're not the government. And who's the role, what should the government do? AI has to move forward. >> Yeah, we're all responsible. The companies are responsible. The companies that we work with, I have yet to interact with customers, or with our customers here, that have some insidious goal, that they're trying to outsmart their customers. They're not. Everyone's looking to do the best and deliver the most relevant products in the marketplace. The government, they absolutely... 
The political structure we have, it has to be really intelligent and it's got to get up-skilled in this space and it needs to do it quickly, both at the economy level, as well as for our defense. But the individuals, all of us as individuals, we are already subjected to this type of artificial intelligence in our everyday lives. Look at streaming, streaming media. Right now every single one of us goes out through a streaming source, and we're getting recommendations on what we should watch next. And we're already adapting to these things, I am. I'm like stop showing me all the stuff you know I want to watch, that's not interesting to me. I want to find something I don't know I want to watch, right? So we all have to get educated, we're all responsible for these things. And again, I see a much more positive side of this. I'm not trying to get into the fear-mongering side of all the things that could go wrong, I want to focus on the good stories, the positive stories. If I'm in a courtroom and I lose a court case because I couldn't afford the best attorney and I have the bias of a judge, I would certainly like artificial intelligence to make a determination that allows me to drive an appeal, as one example. Things like that are really creative in the world that we need to do. Tampering down this wild speculation we have on the markets. I mean, we are all victims of really bad data decisions right now, almost the worst data decisions. For me, I see this as a way to actually improve all those things. Fraud fees will be reduced. That helps everybody, right? Less speculation and these wild swings, these are all helpful things. >> Well Ken, John and Charna, thank- (audio feedback) >> Go ahead, finish. Get that word in. >> Sorry. I think that point you were making though John, is we are still a capitalist society, but we're no longer a shareholder capitalist society, we are a stakeholder capitalist society and the stakeholder is the society itself. It is us, it what we want to see. And so yes, I still want money. Obviously there are things that I want to buy, but I also care about well-being. I think it's that little shift that we're seeing that is actually you and I holding our own teams accountable for what they do. >> Yeah, culture first is a whole new shift going on in these companies that's a for-profit, mission-based. Ken, John, Charna, thanks for coming on Around the CUBE, Unpacking AI. Let's go around the CUBE Ken, John and Charna in that order, and just real quickly, unpacking AI, what's your final word? >> (laughs) I really... I'm interested in John's take that there's a democratization coming provided these tools will be available to everyone. I would certainly love to believe that. It seems like in the past, we've seen no, that access to these kind of powerful, paradigm-changing tools tend to be concentrated among a very small group of people and the benefits accrue to a very small group of people. But I hope that doesn't happen here. You know, I'm optimistic as well. I like the utopian side where we all have this amazing access to information and so many new problems can get solved with amazing amounts of data that we never could've touched before. Though you know, I think about that. I try to let that help me sleep at night, and not the fact that, you know... every public figure I see on TV is kind of out of touch about technology and only one candidate suggests the universal basic income, and it's kind of a crackpot idea. Those are the kind of things that keep me up at night. 
>> All right, John, final word. >> I think it's beautiful, AI's beautiful. We're on the cusp of a whole new world, it's nothing but positivity I see. We have to be careful. We're all nervous about it. None of us know how to approach these things, but as human beings, we've been here before. We're here all the time. And I believe that we can all collectively get better lives for ourselves, for the environment, for everything that's out there. It's here, it's now, it's definitely real. I encourage everyone to hurry up on their own education. Every company, every layer of government to start really embracing these things and start paying attention. It's catching us all a little bit by surprise, but once you see it in production, you see it real, you'll be impressed. >> Okay, Charna, final word. >> I think one thing I want to leave people with is what we incentivize is what we end up optimizing for. This is the same for human behavior. You're training a new employee, you put incentives on the way that they sell, and that's, they game the system. AIs specifically find the optimum route, that is their job. So if we don't understand more complex cost functions, more complex representative ways of training, we're going to end up in a space, before we know it, that we can't get out of. And especially if we're using uninspectable AI. We really need to move towards augmentation. There are some companies that are implementing this now that you may not even know. Zillow, for example, is using AI to give you a cost for your home just by the photos and the words that you describe it with, but they're also purchasing houses without a human in the loop in certain markets, based upon an inspection later by a human. And so there are these big bets that we're making within these massive corporations, but if you're going to do it as an individual, take a Coursera class on AI and take a Coursera class on ethics so that you can understand what the pitfalls are going to be, because that cost function is incredibly important. >> Okay, that's a wrap. Looks like we have a winner here. Charna, you got 18, John 16. Ken came in with 12, beaten again! (both laugh) Okay, Ken, seriously, great to have you guys on, a pleasure to meet everyone. Thanks for sharing on Around the CUBE Unpacking AI, panel number two. Thank you. >> Thanks a lot. >> Thank you. >> Thanks. I've been defeated by artificial intelligence again! (all laugh) (upbeat music)
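A brief illustration of Charna's closing point that an optimizer pursues exactly the objective it is given: the sketch below uses made-up scores and a toy penalty term, purely to show how changing the cost function changes the chosen action. It is not drawn from any system discussed in the panel.

```python
# Toy illustration of "what we incentivize is what we end up optimizing for" (hypothetical numbers).
actions = {
    # action: (engagement_score, estimated_harm)
    "sensational_content": (0.95, 0.80),
    "balanced_content":    (0.70, 0.10),
    "educational_content": (0.55, 0.02),
}

def pick(harm_weight: float) -> str:
    """Return the action that maximizes engagement minus a weighted harm penalty."""
    return max(actions, key=lambda a: actions[a][0] - harm_weight * actions[a][1])

print(pick(0.0))  # engagement-only objective -> "sensational_content"
print(pick(1.0))  # harm-penalized objective  -> "balanced_content"
```

With no penalty the optimizer picks the highest-engagement action; adding the harm term changes the answer, which is exactly why the choice of cost function matters.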
SUMMARY :
John Furrier hosts an Around theCUBE Unpacking AI panel with Ken, John, and Charna on the role AI is playing in society: whether it will usher in a golden age or dominate over humans, who is responsible for AI ethics (companies, lawmakers, or individuals), biased training data and image recognition, political ad targeting, and how incentives and cost functions shape what AI systems end up optimizing; Charna takes the top score.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jack Dorsey | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Ken Jennings | PERSON | 0.99+ |
John Hinson | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
John | PERSON | 0.99+ |
Ken | PERSON | 0.99+ |
December | DATE | 0.99+ |
Charna | PERSON | 0.99+ |
October 2019 | DATE | 0.99+ |
Mark | PERSON | 0.99+ |
September | DATE | 0.99+ |
Evotek | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
100 images | QUANTITY | 0.99+ |
1,000 images | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
US | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Washington, D.C. | LOCATION | 0.99+ |
more than nine percent | QUANTITY | 0.99+ |
200,000 civilians | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Star Trek | TITLE | 0.99+ |
10 companies | QUANTITY | 0.99+ |
Terminator 2 | TITLE | 0.99+ |
Juniper Networks | ORGANIZATION | 0.99+ |
12 | QUANTITY | 0.99+ |
30 different countries | QUANTITY | 0.99+ |
Textio | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
NYPD | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
The Matrix | TITLE | 0.99+ |
Hollywood | ORGANIZATION | 0.99+ |
Watson | PERSON | 0.99+ |
Star Wars | TITLE | 0.99+ |
Washington | LOCATION | 0.99+ |
200 different elections | QUANTITY | 0.99+ |
First question | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
FICO | ORGANIZATION | 0.99+ |
15 lines | QUANTITY | 0.99+ |
Johnson and Johnson | ORGANIZATION | 0.99+ |
Skynet | ORGANIZATION | 0.98+ |
18 | QUANTITY | 0.98+ |
first reaction | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
30 years | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Around theCUBE, Unpacking AI Panel | CUBEConversation, October 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello everyone, welcome to theCUBE studio here in Palo Alto. I'm John Furrier your host of theCUBE. We're here introducing a new format for CUBE panel discussions, it's called Around theCUBE and we have a special segment here called Get Smart: Unpacking AI with some great guests in the industry. Gene Santos, Professor of Engineering in the College of Engineering at Dartmouth College. Bob Friday, Vice President and CTO at Mist, a Juniper Company. And Ed Henry, Senior Scientist and Distinguished Member of the Technical Staff for Machine Learning at Dell EMC. Guys this is a format, we're going to keep score and we're going to throw out some interesting conversations around Unpacking AI. Thanks for joining us here, appreciate your time. >> Yeah, glad to be here. >> Okay, first question, as we all know AI is on the rise, we're seeing AI everywhere. You can't go to a show or see marketing literature from any company, whether it's consumer or tech company around, they all have AI, AI something. So AI is on the rise. The question is, is it real AI, is AI relevant from a reality standpoint, what really is going on with AI, Gene, is AI real? >> I think a good chunk of AI is real there. It depends on what you apply it to. If it's making some sort of decisions for you, that is AI that's coming into play. But there's also a lot of AI out there that potentially is just simply a script. So, you know, one of the challenges that you'll always have is that, if it were scripted, is it scripted because somebody's already developed the AI and now just pulled out all the answers and is just using the answers straight? Or is it actively learning and changing on its own? I would tend to say that anything that's learning and changing on its own, that's where you're having the evolving AI and that's where you get the most power from. >> Bob what's your take on this, AI real? >> Yeah, if you look at Google, what you see is AI really became real in 2014. That's when AI and ML really became a thing in the industry and when you look at why it became a thing in 2014, it's really back when we actually saw TensorFlow, open source technology, really become available. It's all that Amazon compute story. You know, you look at what we're doing here at Mist, I really don't have to worry about compute, storage, except for the Amazon bill I get every month now. So I think you're really seeing AI become real, because of some key turning points in the industry. >> Ed, your take, AI real? >> Yeah, so it depends on what lens you want to kind of look at it through. The notion of intelligence is something that's kind of ill defined and depending on how you want to interpret that will kind of guide whether or not you think it's real. I tend to call things AI if they have a notion of agency. So if it can navigate its problem space without human intervention. So, really it depends on, again, what lens you kind of want to look at it through. It's a set of moving goalposts, right? If you took your smartphone back to Turing when he was coming up with the Turing test and asked him whether this intelligent, or somewhat intelligent, device was AI, would that be AI? To him, probably, back then. So really it depends on how you kind of want to look at it. >> Is AI the same as it was in 1988?
Or has it changed, what's the change point with AI because some are saying, AI's been around for a while but there's more AI now than ever before, Ed we'll start with you, what's different with AI now versus say in the late 80s, early 90s? >> See what's funny is some of the methods that we're using aren't different, I think the big push that happened in the last decade or so has been the ability to store as much data as we can along with the ability to have as much compute readily disposable as we have today. Some of the methodologies I mean there was a great Wired article that was published and somebody referenced called, method called Eigenvector Decomposition they said it was from quantum mechanic, that came out in 1888 right? So it really a lot of the methodologies that we're using aren't much different, it's the amount of data that we have available to us that represents reality and the amount of compute that we have. >> Bob. >> Yeah so for me back in the 80s when I did my masters I actually did a masters on neural networks so yeah it's been around for a while but when I started Mist what really changed was a couple things. One is this modern cloud stack right so if you're going to have to build an AI solution really have to have all the pieces ingest tons of data and process it in real time so that is one big thing that's changed that we didn't have 20 years ago. The other big thing is we had access to all this open source TensorFlow stuff right now. People like Google and Facebook have made it so easy for the average person to actually do an AI project right? You know anyone here, anyone in the audience here could actually train a machine learning model over the weekend right now, you just have to go to Google, you have to find kind of the, you know they have the data sets you want to basically build a model to recognize letters and numbers, those data sets are on the internet right now and you personally yourself could go become a data scientist over the weekend. >> Gene, your take. >> Yeah I think also on top of that because of all that availability on the open software anybody can come in and start playing with AI, it's also building a really large experience base of what works and what doesn't work and because they have that now you can actually better define the problem you're shooting for and when you do that you increase you know what's going to work, what's not going to work and people can also tell you that on the part that's not going to work, how's it going to expand but I think overall though this comes back to the question of when people ask what is AI, and a lot of that is just being focused on machine learning and if it's just machine learning that's kind of a little limited use in terms of what you're classifying or not. Back in the early 80s AI back then is really what people are trying to call artificial general intelligence nowadays but it's that all encompassing piece. All the things that you know us humans can do, us humans can reason about, all the decision sequences that we make and so you know that's the part that we haven't quite gotten to but there is all the things that's why the applications that the AI with machine learning classification has gotten us this far. 
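As a concrete illustration of Bob's point about training a model over a weekend: the letters-and-numbers data sets he mentions are the kind of thing MNIST provides, and a few lines of TensorFlow/Keras are enough to train a digit recognizer. This is a generic sketch of that exercise, not code from any of the panelists.

```python
# Minimal digit-recognition example in the spirit of "train a model over the weekend".
import tensorflow as tf

# MNIST: 60,000 training images of handwritten digits, bundled with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))  # typically around 97-98% test accuracy
```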
>> Okay machine learning is certainly relevant, it's been one of the most hottest, the hottest topic I think in computer science and with AI becoming much more democratized you guys mentioned TensorFlow, a variety of other open source initiatives been a great wave of innovation and again motivation, younger generations is easier to code now than ever before but machine learning seems to be at the heart of AI and there's really two schools of thought in the machine learning world, is it just math or is there more of a cognition learning machine kind of a thing going on? This has been a big debate in the industry, I want to get your guys' take on this, Gene is machine learning just math and running algorithms or is there more to it like cognition, where do you guys fall on this, what's real? >> If I look at the applications and look what people are using it for it's mostly just algorithms it's mostly that you know you've managed to do the pattern recognition, you've managed to compute out the things and find something interesting from it but then on the other side of it the folks working in say neurosciences, the first people working in cogno-sciences. You know I have the interest in that when we look at that, that machine learning does it correspond to what we're doing as human beings, now because the reason I fall more on the algorithm side is that a lot of those algorithms they don't match what we're often thinking so if they're not matching that it's like okay something else is coming up but then what do we do with it, you know you can get an answer and work from it but then if we want to build true human intelligence how does that all stack together to get to the human intelligence and I think that's the challenge at this point. >> Bob, machine learning, math, cognition is there more to do there, what's your take? >> Yeah I think right now you look at machine learning, machine learning are the algorithms we use, I mean I think the big thing that happened to machine learning is the neural network and deep learning, that was kind of a mild stepping stone where we got through and actually building kind of these AI behavior things. You know when you look what's really happening out there you look at the self driving car, what we don't realize is like it's kind of scary right now, you go to Vegas you can actually get on a driving bus now, you know so this AI machine learning stuff is starting to happen right before our eyes, you know when you go to the health care now and you get your diagnosis for cancer right, we're starting to see AI in image recognition really start to change how we get our diagnosis. And that's really starting to affect people's lives. So those are cases where we're starting to see this AI machine learning stuff is starting to make a difference. When we think about the AI singularity discussion right when are we finally going to build something that really has human behavior. I mean right now we're building AI that can actually play Jeopardy right, and that was kind of one of the inspirations for my company Mist was hey, if they can build something to play Jeopardy we should be able to build something answer questions on par with network domain experts. So I think we're seeing people build solutions now that do a lot of behaviors that mimic humans. 
I do think we're probably on the path to building something that is truly going to be on par with human thinking right, you know whether it's 50 years or a thousand years I think it's inevitable on how man is progressing right now if you look at the technologically exponential growth we're seeing in human evolution. >> Well we're going to get to that in the next question so you're jumping ahead, hold that thought. Ed, machine learning just math, pattern recognition or is there more cognition there to be had? Where do fall in this? >> Right now it's, I mean it's all math, so we collect something some data set about the world and then we use algorithms and some representation of mathematics to find some pattern, which is new and interesting, don't get me wrong, when you say cognition though we have to understand that we have a fundamentally flawed perspective on how maybe the one guiding light that we have on what intelligence could be would be ourselves right. Computers don't work like brains, brains are what we determine embody our intelligence right, computers, our brains don't have a clock, there's no state that's actually between different clock cycles that light up in the brain so when you start using words like cognition we end up trying to measure ourselves or use ourselves as a ruler and most of the methodologies that we have today don't necessarily head down that path. So yeah that's kind of how I view it. >> Yeah I mean stateless those are API kind of mindsets, you can't run Kubernetes in the brain. Maybe we will in the future, stateful applications are always harder than stateless as we all know but again when I'm sleeping, I'm still dreaming. So cognition in the question of human replacement. This has been a huge conversation. This is one, the singularity conversation you know the fear of most average people and then some technical people as well on the job front, will AI replace my job will it take over the world is there going to be a Skynet Terminator moment? This is a big conversation point because it just teases out what could be and tech for good tech for bad. Some say tech is neutral but it can be shaped. So the question is will AI replace humans and where does that line come from. We'll start with Ed on this one. What do you see this singularity discussion where humans are going to be replaced with AI? >> So replace is an interesting term, so there I mean we look at the last kind of Industrial Revolution that happened and people I think are most worried about the potential of job loss and when you look at what happened during the Industrial Revolution this concept of creative destruction kind of came about and the idea is that yes technology has taken some jobs out of the market in some way shape or form but more jobs were created because of that technology, that's kind of our one again lighthouse that we have with respect to measuring that singularity in and of itself. Again the ill defined definition, or the ill defined notion of intelligence that we have today, I mean when you go back and you read some of the early papers from psychologists from the early 1900s the experiment specifically who came up with this idea of intelligence he uses the term general intelligence as kind of the first time that all of civilization has tried to assign a definition to what is intelligent right? 
And it's only been roughly 100 years or so or maybe a little longer since we have had this understanding that's been normalized at least within western culture of what this notion of intelligence is so singularity this idea of the singularity is interesting because we just don't understand enough about the one measure ruler or yardstick that we have that we consider intelligence ourselves to be able to go and then embed that inside of a thing. >> Gene what's your thoughts on this, reasoning is a big part of your research you're doing a lot of research around intent and contextual, all these cool behavioral things you know this is where machines are there to augment or replace, this is the conversation, your view on this? >> I think one of the things with this is that that's where the downs still lie, if we have bad intentions, if we can actually start communicating then we can start getting the general intelligence yeah I mean sort of like what Ed was referring to how people have been trying to define this but I think one of the problems that comes up is that computers and stuff like that don't really capture that at this time, the intentions that they have are still at a low level, but if we start tying it to you know the question of the terminator moment to the singularity, one of the things is that autonomy, you know how much autonomy that we give to the algorithm, how much does the algorithm have access to? Now there could be you know just to be on an extreme there could be a disaster situation where you know we weren't very careful and we provided an API that gives full autonomy to whatever AI we have to run it and so you can start seeing elements of Skynet that can come from that but I also tend to come to analysis that hey even with APIs, while it's not AI, APIs a lot of that also we have the intentions of what you're going to give us to control. Then you have the AI itself where if you've defined the intentions of what it is supposed to do then you can avoid that terminator moment in terms of that's more of an act. So I'm seeing it at this point. And so overall singularity I still think we're a ways off and you know when people worry about job loss probably the closest thing that I think that can match that in recent history is the whole thing on automation, I grew up at the time in Ohio when the steel industry was collapsing and that was a trade off between automation and what the current jobs are and if you have something like that okay that's one thing that we go forward dealing with and I think that this is something that state governments, our national government something we should be considering. If you're going to have that job loss you know what better study, what better form can you do from that and I've heard different proposals from different people like, well if we need to retrain people where do you get the resources from it could be something even like AI job pack. And so there's a lot of things to discuss, we're not there yet but I do believe the lower, repetitive jobs out there, I should say the things where we can easily define, those can be replaceable but that's still close to the automation side. >> Yeah and there's a lot of opportunities there. Bob, you mentioned in the last segment the singularity, cognition learning machines, you mentioned deep learning, as the machines learn this needs more data, data informs. If it's biased data or real data how do you become cognitive, how do you become human if you don't have the data or the algorithms? 
The data's the-- >> I mean and I think that's one of the big ethical debates going on right now right you know are we basically going to basically take our human biases and train them into our next generation of AI devices right. But I think from my point of view I think it's inevitable that we will build something as complex as the brain eventually, don't know if it's 50 years or 500 years from now but if you look at kind of the evolution of man where we've been over the last hundred thousand years or so, you kind of see this exponential rise in technology right from, you know for thousands of years our technology was relatively flat. So in the last 200 years where we've seen this exponential growth in technology that's taking off and you know what's amazing is when you look at quantum computing what's scary is, I always thought of quantum computing as being a research lab thing but when you start to see VC's and investing in quantum computing startups you know we're going from university research discussions to I guess we're starting to commercialize quantum computing, you know when you look at the complexity of what a brain does it's inevitable that we will build something that has basic complexity of a neuron and I think you know if you look how people neural science looks at the brain, we really don't understand how it encodes, but it's clear that it does encode memories which is very similar to what we're doing right now with our AI machine right? We're building things that takes data and memories and encodes in some certain way. So yeah I'm convinced that we will start to see more AI cognizance and it starts to really happen as we start with the next hundred years going forward. >> Guys, this has been a great conversation, AI is real based upon this around theCUBE conversation. Look at I mean you've seen the evidence there you guys pointed it out and I think cloud computing has been a real accelerant with the combination of machine learning and open source so you guys have illustrated and so that brings up kind of the final question I'd love to get each of you's thought on this because Bob just brought up quantum computing which as the race to quantum supremacy goes on around the world this becomes maybe that next step function, kind of what cloud computing did for revitalizing or creating a renaissance in AI. What does quantum do? So that begs the question, five ten years out if machine learning is the beginning of it and it starts to solve some of these problems as quantum comes in, more compute, unlimited resource applied with software, where does that go, five ten years? We'll go start with Gene, Bob, then Ed. Let's wrap this up. >> Yeah I think if quantum becomes a reality that you know when you have the exponential growth this is going to be exponential and exponential. Quantum is going to address a lot of the harder AI problems that were from complexity you know when you talk about this regular search regular approaches of looking up stuff quantum is the one that allows you now to potentially take something that was exponential and make it quantum. And so that's going to be a big driver. That'll be a big enabler where you know a lot of the problems I look at trying to do intentions is that I have an exponential number of intentions that might be possible if I'm going to choose it as an explanation. But, quantum will allow me to narrow it down to one if that technology can work out and of course the real challenge if I can rephrase it into say a quantum program while doing it. 
But that's I think the advance is just beyond the step function. >> Beyond a step function you see. Okay Bob your take on this 'cause you brought it up, quantum step function revolution what's your view on this? >> I mean your quantum computing changes the whole paradigm right because it kind of goes from a paradigm of what we know, this binary if this then that type of computing. So I think quantum computing is more than just a step function, I think it's going to take a whole paradigm shift of you know and it's going to be another decade or two before we actually get all the tools we need to actually start leveraging quantum computing but I think that is going to be one of those step functions that basically takes our AI efforts into a whole different realm right? Let us solve another whole set of classic problems and that's why they're doing it right now because it starts to let you be able to crack all the encryption codes right? You know where you have millions of billions of choices and you have to basically find that one needle in the haystack so quantum computing's going to basically open that piece of the puzzle up and when you look at these AI solutions it's really a collection of different things going underneath the hood. It's not this one algorithm that you're doing and trying to mimic human behavior, so quantum computing's going to be yet one more tool in the AI toolbox that's going to move the whole industry forward. >> Ed, you're up, quantum. >> Cool, yeah so I think it'll, like Gene and Bob had alluded to fundamentally change the way we approach these problems and the reason is combinatorial problems that everybody's talking about so if I want to evaluate the state space of anything using modern binary based computers we have to kind of iteratively make that search over that search space where quantum computing allows you to kind of evaluate the entire search space at once. When you talk about games like AlphaGo, you talk about having more moves on a blank 19 by 19 AlphaGo board than you have if you put 1,000 universes on every proton of our universe. So the state space is absolutely massive so searching that is impossible. Using today's binary based computers but quantum computing allows you to evaluate kind of search spaces like that in one big chunk to really simplify the aspect but I think it will kind of change how we approach these problems to Bob and Gene's point with respect to how we approach, the technology once we crack that quantum nut I don't think will look anything like what we have today. >> Okay thank you guys, looks like we have a winner. Bob you're up by one point, we had a tie for second but Ed and Gene of course I'm the arbiter but I've decided Bob you nailed this one so since you're the winner, Gene you guys did a great job coming in second place, Ed good job, Bob you get the last word. Unpacking AI, what's the summary from your perspective as the winner of Around theCUBE. >> Yeah no I think you know from a societal point of view I think AI's going to be on par with kind of the internet. It's going to be one of these next big technology things. I think it'll start to impact our lives and people when you look around it it's kind of sneaking up on us, whether it's the self driving car the healthcare cancer, the self driving bus, so I think it's here, I think we're just at the beginnings of it. I think it's going to be one of these technologies that's going to basically impact our whole lives or our next one or two decades. 
Next 10, 20 years is just going to be exponentially growing everywhere in all our segments. >> Thanks so much for playing guys really appreciate it we have an inventor entrepreneur, Gene doing great research at Dartmouth check him out, Gene Santos at Dartmouth Computer Science. And Ed, technical genius at Dell, figuring out how to make those machines smarter and with the software abstractions growing you guys are doing some good work over there as well. Gentlemen thank you for joining us on this inaugural Around theCUBE unpacking AI Get Smart series, thanks for joining us. >> Thank you. >> Thank you. >> Okay, that's a wrap everyone this is theCUBE in Palo Alto, I'm John Furrier thanks for watching. (upbeat funk music)
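Ed's Go-board comparison from earlier in the conversation can be sanity-checked with a little arithmetic. The sketch below uses ballpark figures that are assumptions on the editor's part (3^361 raw board configurations, roughly 10^80 protons in the observable universe), not numbers quoted by the panel.

```python
# Back-of-the-envelope check of the Go comparison (ballpark figures, assumptions noted in comments).
from math import log10

board_configs = 3 ** 361          # each of 361 points is empty, black, or white (ignores legality rules)
protons_universe = 10 ** 80       # common estimate for the observable universe
protons_1000_universes = 1000 * protons_universe

print(f"Go board configurations ~ 10^{log10(board_configs):.0f}")        # ~10^172
print(f"Protons x 1000 universes ~ 10^{log10(protons_1000_universes):.0f}")  # 10^83
print(board_configs > protons_1000_universes)  # True, by roughly 89 orders of magnitude
```

Even though only a fraction of those raw configurations are legal positions, the comparison still holds by a wide margin.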
SUMMARY :
John Furrier hosts an Around theCUBE Unpacking AI panel with Gene Santos of Dartmouth, Bob Friday of Mist, a Juniper Company, and Ed Henry of Dell EMC on whether AI is real, how today's machine learning differs from human cognition, whether machines will eventually match human intelligence, and how quantum computing could reshape the field; Bob takes the top score.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Bob | PERSON | 0.99+ |
Gene Santos | PERSON | 0.99+ |
Ed Henry | PERSON | 0.99+ |
Ed | PERSON | 0.99+ |
Gene | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
1988 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
50 years | QUANTITY | 0.99+ |
1888 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Ohio | LOCATION | 0.99+ |
Bob Friday | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
October 2019 | DATE | 0.99+ |
thousands of years | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
first question | QUANTITY | 0.99+ |
Dartmouth Computer Science | ORGANIZATION | 0.99+ |
one point | QUANTITY | 0.99+ |
1,000 universes | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
five ten years | QUANTITY | 0.98+ |
Dell EMC | ORGANIZATION | 0.98+ |
decade | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
two schools | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
Mist | ORGANIZATION | 0.97+ |
80s | DATE | 0.97+ |
late 80s | DATE | 0.97+ |
first time | QUANTITY | 0.97+ |
Juniper Company | ORGANIZATION | 0.97+ |
early 1900s | DATE | 0.97+ |
early 90s | DATE | 0.97+ |
second place | QUANTITY | 0.97+ |
20 years ago | DATE | 0.97+ |
early 80s | DATE | 0.97+ |
Dartmouth | ORGANIZATION | 0.96+ |
one needle | QUANTITY | 0.95+ |
last decade | DATE | 0.95+ |
500 years | QUANTITY | 0.93+ |
each | QUANTITY | 0.93+ |
100 years | QUANTITY | 0.93+ |
AlphaGo | ORGANIZATION | 0.93+ |
one algorithm | QUANTITY | 0.92+ |
Jeopardy | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
One | QUANTITY | 0.92+ |
one big thing | QUANTITY | 0.92+ |
Silicon Valley, | LOCATION | 0.92+ |
one thing | QUANTITY | 0.91+ |
19 | QUANTITY | 0.91+ |
TensorFlow | TITLE | 0.91+ |
Industrial Revolution | EVENT | 0.91+ |
millions of billions of choices | QUANTITY | 0.9+ |
Chris Williams, GreenPages | VTUG Winter Warmer 2019
>> From Gillette Stadium in Foxboro, Massachusetts, it's the CUBE. Covering VTUG Winter Warmer 2019. Brought to you by SiliconANGLE Media. >> I'm Stu Miniman, and this is theCUBE's coverage of the VTUG Winter Warmer 2019. Just had Rob Ninkovich from the New England Patriots on the program. And, happy to bring on the program, one of the co-leaders of this VTUG event, Chris Williams. Whose day job is as a cloud architect with GreenPages, but is co-leader here at VTUG, does some user groups, and many other things, and actually a CUBE alum, even. Back four years ago, the first year-- >> That's right. >> -that we did this, we had you on the program, but a few things have changed, you know... You have a little less hair. >> This got a little longer. A little less here. >> More gray hair. Things like that. We were talking, >> Funny how that works out. you know, Rob was, you know, talking about how he's 35, and we were, like, yeah, yeah, 35, I remember 35. >> A child. (laughing) >> Things like that. Just wait til you hit your 40's and stuff starts breaking. >> Oh, so much to look forward to. >> So, Chris, first of all, thank you. We love coming to an event like this. I got to talk to a few users on-air, and I talked to, you know, get a, just, great pulse of what's going on in the industry. Virtualization, cloud computing, and beyond. So, you know, we know these, you know, local events are done, you know, a lot of it is the passion of the people that do it, and therefore we know a lot goes into it. >> I appreciate it, thanks for having me on. >> Alright, so bring people up to speed. What's your life like today? What do you do for work? What do you do for, you know, the passion projects? >> Ah, so the passion projects recently have been a lot of, we're doing a Python for DevOp series on vBrownBag. For the AWS Portsmouth User Group, we're also doing a machine learning and robotics autonomous car driving project, using Python as well. And for VTUG, we're looking at a couple of different tracks, also with the autonomous driving, and some more of the traditional, like, VMware to CAS Cloud Hybrid training kind of things. >> Excellent, so in the near future, the robots will be replacing the users here, and we'll have those running around. >> I have my Skynet t-shirt on underneath here. >> Ah, yes, Skynet. (laughing) You know if you Tweet that out, anything about Skynet, there's bots that respond to you with, like, things from The Terminator movies. >> I built one of them. >> Did you? (laughter) Well, thank you. They always make me laugh, and if there's not a place for snark on Twitter, then, you know, all we have left is kind of horrible politics, so. >> That's true, that's true. >> Great, so, yeah, I mean, Cloud AI, robotics, you know, what's the pulse? When you talk to users here, you know, they started out, you know, virtualization. There's lots of people that are, "I'm rolling out my virtualization, "I'm expanding what use-cases I can use it on, "I might be thinking about how cloud fits into that, "I'm looking at, you know, VmMare and Amazon especially, "or Microsoft, how all those fit together." You know, what are you hearing, what drives some of those passion projects other than, you know, you're interested in 'em? >> So, a lot of what my passion projects are driven, it's kind of a confluence of a couple of different events. 
I'm passionate about the things that I work on, and when I get into a room with customers, or whatever like that, or with the end users, getting together and talking about, you know, what's the next step? So, we as users, as a user group and as a community, we're here to learn about not just what today is... what's happening today, but, what's going to keep us relevant in the future, what are the new things coming down the pipe. And, a lot of that is bending towards the things that I'm interested in, fortuitously. Learning how to take my infrastructure knowledge and parlay that into a DevOps framework. Learning how to take Python and some of the stuff that I'm learning from the devs on the AWS side, and teaching them the infrastructure stuff. So, it's a bi-directional learning thing, where we all come together to that magical DevOps unicorn in the middle, that doesn't really exist, but... >> Yeah, I tell you, we've had this conversation a few times here, and many times over the last few years especially, is that, there's lots of opportunities to learn. And, you know, >> Too many. >> is your job threatened? And, the only reason your job should be threatened, is if you think you can keep doing, year after year, what you were doing before, because chances are either you will be disrupted in the job, or if not, the people you're working for might be disrupted, because if they're not pushing you along those tracks, and the tools and the communities to be able to learn stuff is, I can learn stuff at a fraction of the cost in faster times. >> Yep. >> Might not learn as much, but I'm saying I can pick up new skills, I can start getting into cloud. You know, it's not $1000 and six months to get the first piece of it. >> Exactly. >> It might be 40 to 60 hours online. >> Yep. >> And, you know, cost you 30 to 100 bucks, so, it's... >> Yeah, the lift in training, is a lot easier because, you're basically swiping your credit card, and with AWS, you have a free tier for 12 months, that you can play with and just, you know, doodle around, and then... And figure things out. You don't have to buy a home lab, you don't have to buy NFR license, or get NFR licenses from Vmware. But, the catch to that is, you do have to do it. There's a... remember Charlie and the Chocolate Factory? >> Of course. >> Remember the dad was doing the toothpaste tubes, he was the guys screwing the toothpaste tubes onto the machines. At the end of the story, he got, you know, automated out of a job, because they had a machine screwing the toothpaste tubes on. And then, at the end, he was the guy fixing the machine that was screwing the toothpaste tubes on. >> Right. >> So, in our world, that infrastructure guy, who's been deploying manually virtual machines, there's a piece of code, there's an infrastructure code, that will do that for them now. They've got to know how to modify and refactor that piece of code, and get good. And, get good at that. >> Yeah, you know, I've talked to a couple of people, we talk about, you know, there's big, you know, vendor shows, and then there's, you know, regional user groups and meet-up's, and the like. Give us a little insight into, you know, let's start with VTUG specifically, and, you know, what you're doin' up in the Portland area. Would love to hear some of the dynamics now, you know, it feels like there's just been a ground swell for many years now, to drive those, you know, local, and many times, more specialized events, as opposed to bigger, broader events. 
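For readers who want to see what "a piece of code that will deploy virtual machines" looks like in practice, here is a hedged example in Python with boto3, in the spirit of the AWS free tier Chris mentions. The AMI ID, key pair, and tag values are placeholders, not details from the conversation.

```python
# Illustrative infrastructure-as-code snippet (placeholder AMI, key pair, and tag values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",           # free-tier eligible size
    KeyName="my-keypair",              # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "vtug-lab"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```

The point is that this file, rather than a sequence of console clicks, becomes the artifact an infrastructure engineer maintains and refactors.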
>> Yeah, it's interesting, because we like the bigger, broader events, because it gets everybody together to talk about, things across a broad spectrum. So, here we have the infrastructure guys, and we have the DevOps guys, and we have a couple of Developers, and stuff like that. And so, getting that group think, that mind share, into one room together, gets everybody's creative juices flowing. So, people are starting to learn from each other, that the Devs, are getting some ideas about how infrastructure works, the infrastructure guys are getting some ideas about, you know, how to, how to automate a certain piece of their job. To make that, you know, minimize and maximize a thousand times, you know, go away. So, I love... I love the larger groups because of that. The smaller groups are more specialized, more niche. So, like, when you get into a smaller version, then, it's mostly infrastructure guys, or mostly Devs, or some mixture thereof. So, they both definitely have their place, and that's why I love doing both of them. >> Yeah, and, you know, what can you share, kind of, speeds and feeds of this show here. I know, it's usually over a thousand people >> Yep. >> You know, had, you know, bunch of keynotes going on. You know, we talked about The Patriots, in, you know, quite a number of, you know, technology companies, people that are the, kind of, SIs or VARs in the mix. >> Yeah, so, we had, I think, 35 sponsors. We had, six different keynotes, or six general sessions. We talked about everything from Azure to AWS, to VMware. We covered the gamut of the things that the users are interested in. >> You had... don't undersell the general sessions there. (laughing) There was one that was on, like, you know, Blockchain and Quantum computing, I heard. >> Yep, yep. >> There was, an Amazon session, that was just, geekin' out on the database stuff, I think, there. >> Yes, yeah, Graph tier, yep. >> So, I mean, you know, it's not just marketing slideware up there, I saw a bunch of code in many of the sessions. >> Oh yeah, yeah. >> You know, this definitely is, you know, I was talkin' with the Amazon... Randall earlier, here on the program, and said that-- >> The Amazon Randall. (laughing) >> Yeah, yeah, sorry, Randall from Amazon, here. >> He's a very large weber. >> Gettin' at the end of the day, I've done a few of these, but, you know, remember like, four years ago, the first, like, cloud 101 session here? >> Yeah, yep. >> And, I was like, you know, I probably could have given that session, but, everybody here was like, "Oh, my gosh", you know, I just found out about that electricity. >> Right. >> You know, that, this is amazing. And, today, most people, understand a little bit more of... We've gotten the 101, so, you know, I'm getting into more of the pieces of it, but. >> Yeah, it was really gratifying because, the one that he gave was, all of the service, all of the new services, of which, there were like, more than 100, in 50 minutes or less. And, he talks really, really fast. And, everybody was riveted, we... I mean, people were coming in, even up until the last minute. And, they all got it. It wasn't like, what am I do... what am I going to do with this? It's, this is what I need to know, and this is valuable information. >> Yeah, we were having a lunch conversation, about, like, when you listen to a Podcast, what speed do you listen on? So, I tend to listen at about one and a half speed, normally. >> Me too, yep. >> You know, Frappe was sayin', he listens at 2x, normally.
>> Does he really? >> Somebody like, Randall, I think I would, put the video up, and you can actually go into YouTube, and things like that, and adjust the speed settings, I might hit, put him down to 0.75, or something like that, >> Yeah, absolutely. >> Because absolutely, you know, otherwise, you can listen to it at full speed, and just like, pause and rewind, and then things like that. But, definitely, someone... I respect that, I'm from New Jersey, originally, I tend to talk a little faster, on camera I try to keep a steady pace, so that, people can keep up with my excitement. >> I do, I speed up too. He actually, does this everyday. He flies to a new city, does it once a day. So, he's, he's gotten... This is like rapid fire now. >> Alright, want to give you the final word, you know, VTUG, you know, I think, people that don't know it, you go to VTUG.com, A Big Winter Warmer, here. There's The Big Summer one, >> The Summer Slam. >> With the world famous, you know, Lobster Bake Fest, there, I've been to that one a few times. I know people that fly from other countries, to come to that one. What else should we know about? >> So, we're about to revamp the website, we've got some new and interesting stuff coming up on there. Now that, we also have our Slack channel, everybody communicates on the back end through that. We're going to start having some user content, for the website. So, people can start posting blog articles, and things of that nature, there. I'm going to start doing, like a little, AW... like learn AWS, on the VTUG blog, so, people can start, you know, ramping up on some of the basics and everything. And, and if, that gains traction, then, we'll maybe get into some more advanced topics, from Azure, and AWS, and VMware of course, VMware is always going to be there, that's... Some of the stuff that Cody is doing, Cody De Arkland is doing, over at VMware, like the CAS stuff, where it's the shim layer, and the management of all the different clouds. That's some really, really cool stuff. So, I'm excited to showcase some of that on the website. >> Alright, wow. Chris Williams, really appreciate you coming. And, as always, appreciate the partnership with the VTUG, to have us here. >> Thanks for havin' me. >> Alright, and thank you as always for watching. We always love to bring you the best community content, we go out to all the shows, help extract the signal from the noise. I'm Stu Miniman, thanks for watchin' The CUBE. (energetic music) (energetic music) (energetic music)
SUMMARY :
Stu Miniman talks with Chris Williams, cloud architect at GreenPages and co-leader of VTUG, at the VTUG Winter Warmer 2019 about the event itself, the value of broad community gatherings versus niche user groups, his Python for DevOps and machine learning side projects, and how infrastructure engineers can reskill around cloud and infrastructure as code.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris Williams | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Rob Ninkovich | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
12 months | QUANTITY | 0.99+ |
40 | QUANTITY | 0.99+ |
$1000 | QUANTITY | 0.99+ |
Rob | PERSON | 0.99+ |
Randall | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Cody Jarklin | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
2x | QUANTITY | 0.99+ |
Frappe | PERSON | 0.99+ |
New Jersey | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
35 | QUANTITY | 0.99+ |
Randell | PERSON | 0.99+ |
GreenPages | ORGANIZATION | 0.99+ |
35 sponsers | QUANTITY | 0.99+ |
0.75 | QUANTITY | 0.99+ |
30 | QUANTITY | 0.99+ |
Cody | PERSON | 0.99+ |
VmMare | ORGANIZATION | 0.99+ |
New England Patriots | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Gillette Stadium | LOCATION | 0.99+ |
60 hours | QUANTITY | 0.99+ |
Vmware | ORGANIZATION | 0.99+ |
more than 100 | QUANTITY | 0.99+ |
100 bucks | QUANTITY | 0.98+ |
four years ago | DATE | 0.98+ |
50 minutes | QUANTITY | 0.98+ |
VTUG | ORGANIZATION | 0.98+ |
Lobster Bake Fest | EVENT | 0.98+ |
SiliconANGLE Media | ORGANIZATION | 0.98+ |
Portland | LOCATION | 0.98+ |
Foxboro, Massachusetts | LOCATION | 0.98+ |
once a day | QUANTITY | 0.97+ |
VTUG Winter Warmer 2019 | EVENT | 0.97+ |
today | DATE | 0.97+ |
first | QUANTITY | 0.97+ |
first piece | QUANTITY | 0.97+ |
six different keynotes | QUANTITY | 0.97+ |
VTUG | EVENT | 0.97+ |
YouTube | ORGANIZATION | 0.96+ |
six general sessions | QUANTITY | 0.96+ |
The Terminator | TITLE | 0.96+ |
A Big Winter Warmer | TITLE | 0.96+ |
over a thousand people | QUANTITY | 0.95+ |
Charlie and the Chocolate Factory | TITLE | 0.94+ |
AWS Portsmouth User Group | ORGANIZATION | 0.93+ |
2019 | DATE | 0.92+ |
one | QUANTITY | 0.9+ |
AwS | ORGANIZATION | 0.89+ |
about one and a half | QUANTITY | 0.88+ |
one room | QUANTITY | 0.87+ |
ORGANIZATION | 0.85+ | |
101 | QUANTITY | 0.8+ |
The CUBE | TITLE | 0.8+ |
CAS | ORGANIZATION | 0.79+ |
thousand times | QUANTITY | 0.76+ |
The Patriots | ORGANIZATION | 0.75+ |
NFR | ORGANIZATION | 0.73+ |
Azure | TITLE | 0.72+ |
year | DATE | 0.7+ |
Harley Davis, IBM - IBM Interconnect 2017 - #ibminterconnect - #theCUBE
>> Announcer: Live, from Las Vegas, it's theCUBE. Covering Interconnect 2017. Brought to you by IBM. >> Okay, welcome back everyone we're here live in Las Vegas at the Mandalay Bay, theCUBE's exclusive three day coverage of IBM Interconnect 2017, I'm John Furrier. My co-host, Dave Velliante. Our next guest is Harley Davis, who's the VP of decision management at IBM. Welcome to theCUBE. >> Thank you very much, happy to be here. >> Thanks for your time today, you've got a hot topic, you've got a hot area, making decisions in real-time with data being cognitive, enterprise strong, and data first is really, really hard. So, welcome to theCUBE. What's your thoughts? Because we were talking before we came on about data, we all love, we're all data geeks but the value of the data is all contextual. Give us your color on the data landscape and really the important areas we should shine a light on, that customers are actively working to extract those insights. >> So, you know, traditionally, decisions have really been transactional, all about taking decisions on systems of record, but what's happening now is, we have the availability of all this data, streaming it in real-time, coming from systems of record, data about the past, data about the present, and then data about the future as well, so when you take into account predictive analytics models, machine learning, what you get is kind of data from the future if I can put it that way and what's interesting is how you put it all together, look for situations of risk, opportunity, is there a fraud that's happening now? Is there going to be a lack of resources at a hospital when a patient checks in? How do we put all that context together, look into the future and apply business policies to know what to do about it in real-time and that's really the differentiating use cases that people are excited about now and like you say, it's a real challenge to put that together but it's happening. >> It's happening, and that's, I think that's the key thing and there's a couple megatrends going on right now that's really propelling this. One is machine learning, two is the big data ecosystem as we call it, the big data ecosystem has always been, okay, Hadoop was the first wave, then you saw Spark, and then you're seeing that evolving now to a whole nother level moving data at rest and data in motion is a big conversation, how to do that together, not just I'm a batch only, or real-time only, the integration of those two. Then you combine that with the power of cloud and how fast cloud computing, with compute power, is accelerating, those two forces with machine learning, and IOT, it's just amazing. >> It's all coming together and what's interesting is how you bridge the gap, how you bring it all together, how you create a single system that manages in real-time all this information coming in, how you store it, how you look at, you know, history of events, systems of record and then apply situation detection to it to generate events in real-time. 
So, you know, one of the things that we've been working on in the decision management lab is a system called decision server insights, which is a big real-time platform, you send a stream of events in, it gets information from systems of records, you insert analytics, predictive analytics, machine learning models into it and then you write a series of situation detection rules that look at all that information and can say right now this is what's happening, I link it in with what's likely to happen in the future, for example I can say my predictive analytics model says based on this data, executed right now, this customer, this transaction is likely, 90% likely to be a fraud and then I can take all the customer information, I can apply my rule and I can apply my business policy to say well what do I do about that? Do I let it go through anyway? Because it's okay, do I reject it? Do I send it to a human analyst? We got to put all that together. >> So that use case that you just described, that's happening today, that's state of the art today, so one of the challenges today, and we all know fraud detection's got much, much better in the last several years, it used to take, if you ever found it, it would take six months, right? And it's too late, but still a lot of false positives, that'll negate a transaction, now that's a business rule decision, right? But are we at the point where even that's going to get better and better and better? >> Well, absolutely. I mean the whole, there have been two main ways to do fraud detection in the past. The first one is kind of long scale predictive analytics that you train every few months and requires, you know, lots and lots of history of data but you don't get new use cases that come up in real-time, like you don't have the Ukrainian hacker who decides, you know, if I do a payment from this one website then I can grab a bunch of money right now and then you have the other alternative, which is having a bunch of human analysts who look for cases like that guy and put it in as business rules and what's interesting is to combine the two, to retrain the models in real-time, and still apply the knowledge that the human analysts can get in real-time, and that's happening every day in lots of companies now. >> And that idea of combining transactional data and analytics, you know, has become popularized over the last couple of years, one obvious use case there is ad-tech, right? Making offers to people, marketing, what's the state of that use case? >> Well, let's look at it from the positive perspective. 
What we are able to do now is take information about consumers from multiple sources, you can look at the interaction that you've had with them, let's say you're a financial services company, you get all sorts of information about a company, about a customer, sorry, from the CRM system, from the series of interactions you've had with them, from what they've looked at on your website, but you can also get additional information about them if you know them by their Twitter handle or other social media feeds, you can take information from their Twitter feeds, for example, apply some cognitive technology to extract information from that do sentiment analysis, do natural language processing, you get some sense of meaning about the tweets and then you can combine that in real-time in a system like the one I talked about to say ah, this is the moment, right here, where this guy's interested in a new car, we think he just got a promotion or a raise because he's now putting more money into the bank and we see tweets saying "oh I love that new Porsche 911, "can't wait to go look at it in the showroom," if we can put those things together in real-time, why not send him a proactive offer for a loan on a new car, or put him in touch with a dealer? >> No and sometimes as a consumer I want that, you know, when I'm looking for say, scarce tickets to a show or a play-off game or something and I want the best offer and I'm going to five or six different websites, and somebody were to make me an offer, "hey, here are better seats for a lower price," I would be thrilled. >> So geographic information is interesting too for that, so let's say, for example, that you're, you're traveling to Napa Valley and let's say that we can detect that you just, you know, took out some money from the bank, from your ATM in Napa, now we know you're in Napa, now we know that you're a good customer of the bank, and we have a deal with a tour operator, a wine tour operator, so let's spontaneously propose a wine tour to you, give you a discount on that to keep you a good customer. >> Yeah, so relevant offers like that, as a consumer I'd be very interested in. All too often, at least lately, I feel like we're in the first and second innings of that type of, you know, system, where many of the offers that you get are just, wow, okay, for three weeks after I buy the dishwasher, I'm getting dishwasher ads, but it's getting better, you can sort of see it and feel it. >> You can see it getting a little better. I think this is where the combination of all these technologies with machine learning and predictive analytics really comes to the fore and where the new tools that we have available to data scientists, things like, you know, the data scientist experience that IBM offers and other tools, can help you produce a lot more segmented and targeted analytics models that can be combined with all the other information so that when you see that ad, you say oh, the bank really understands me. 
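(Editor's note: a toy sketch of the signal fusion Harley just outlined, where CRM standing, a location event, and social-media sentiment combine into a proactive offer decision. The sentiment function is a crude stand-in, not Watson NLU or any real service, and every field name here is invented for illustration.)

```python
# Combine three hypothetical signals into one offer decision.
def toy_sentiment(text: str) -> float:
    """Very rough stand-in for a real sentiment / NLP service (range -1..1)."""
    positive = ("love", "great", "can't wait")
    negative = ("hate", "terrible", "angry")
    score = sum(w in text.lower() for w in positive) - sum(w in text.lower() for w in negative)
    return max(-1.0, min(1.0, score / 3))

def should_offer_wine_tour(customer: dict, last_event: dict, tweets: list[str]) -> bool:
    in_napa = last_event.get("type") == "atm_withdrawal" and last_event.get("city") == "Napa"
    good_standing = customer.get("segment") == "platinum"
    upbeat = any(toy_sentiment(t) > 0.3 for t in tweets)
    return in_napa and good_standing and upbeat

customer = {"id": "c-42", "segment": "platinum"}
event = {"type": "atm_withdrawal", "city": "Napa"}
tweets = ["Can't wait for the weekend, love this valley"]
print(should_offer_wine_tour(customer, event, tweets))  # True
```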
>> Harley, one of the things that people are working on right now and most customers, your customers and potential customers that we talk to is I got the insights coming, and I'm working on that, and we're pedaling as fast as we can, but I need actionable insight, this is a decision making thing, so decisions are now what people want to do, so that's what you do, so there's some stats out there that decision making can be less than 30 minutes based on good data, the life of the data, as short as six seconds, this speaks to the data in motion, humans aside of it, I might be on my mobile phone, I might be looking at some industrial equipment, whatever, I could be a decision maker in the data center, this is a core problem, what are you guys doing in this area, because this is really a core problem. Or an opportunity. >> Well this all about leveraging, you know, event driven architectures, Kafka, Spark and all the tools that work with it so that we can grab the data in real-time as it comes in, we can associate it with the rest of the context that's relevant for making a decision, so basically with action, when we talk about actionable insights, what are we talking about? We're talking about taking data in real-time, structured, unstructured data, having a framework for managing it, Kafka, Spark, something like decision server insights in ODM, whatever, applying cognitive technology to turn some of the unstructured data into structured data, applying machine learning, predictive analytics, tools like SPSS to create a kind of prediction of what happens in the future and then applying business rules, something like operational decision management, ODM, in order to apply business policies to the insights we've garnered from the rest of the cycle so that we can do something about it, that's decision manager, that's-- >> So you were saying earlier on the use case about, I get some event data, I bring it in to systems of record, I apply some rules to it, I mean, that doesn't sound very hard, I mean, it's almost as if that's happening now-- >> It's hard. >> Well it's hard, let me get, this is my whole point, this is not possible years ago so that's one point, I want to get some color from you on that because this is ungettable, most of the systems, we even go back ten, five years ago, we siloed, so now rule based stuff seems trivial, practically, okay, by some rules, but it's now possible to put this package together and I know it's hard but conceptually those are three concepts that some would say oh, why weren't we doing this before? >> It's been possible for a long time and we have, you know, we have plenty of customers who combine, you know, who do something as simple as when you get approved for a loan, that's based on a score, which is essentially a predictive analytics model combined with business rules that say approve, not approve, ask for more documentations and that's been done for years so it's been possible, what's even more enabled now is doing it in real-time, taking into account a much greater degree of information, having-- >> John: More data sources. >> Data sources, things like social media, things like sensors from IoT, connected car applications, all sorts of things like that and then retraining the models more frequently, so getting better information about the future, faster and faster. 
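(Editor's note: a hedged sketch of the event-driven flow Harley describes above, consuming events from Kafka, scoring them with a model, and applying a business rule to choose an action, here for the loan-approval example he gives. It assumes the open source kafka-python client and a local broker; the topic, field names, scoring logic, and thresholds are invented for illustration, not any IBM product API.)

```python
import json
from kafka import KafkaConsumer   # pip install kafka-python

def credit_score(event: dict) -> float:
    """Stand-in for a predictive model (SPSS, Spark ML, etc.)."""
    return 0.2 if event.get("debt_to_income", 1.0) > 0.45 else 0.8

def loan_decision(probability_good: float) -> str:
    """Stand-in for ODM-style business rules applied to the prediction."""
    if probability_good >= 0.7:
        return "APPROVE"
    if probability_good >= 0.4:
        return "REQUEST_DOCUMENTS"   # ask the applicant for more documentation
    return "DECLINE"

consumer = KafkaConsumer(
    "loan-applications",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    print(event.get("application_id"), loan_decision(credit_score(event)))
```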
>> Give an example of some use cases that you're working with customers on because I think that's fascinating and I think I would agree with you that it's been possible before but the concepts are known, but now it's accelerating to a whole nother level. Talk about some of the use cases end-to-end that you guys have done with customers. >> Let's think about something like an airline, that wants to manage its operations and wants to help its passengers manage operational disruptions or changes. So what we want to do now is, take a series of events coming from all sorts of sources, and that can be basic operational data like the airplanes, what's the airplane, is it running late, is it not running late, is the connection running late, combining it with things about the weather, so information that we get about upcoming weather events from weather analytics models, and then turning that into predicting what's going to happen to this passenger through his journey in the future so that we can proactively notify him that he should be either, we can rebook him automatically on a flight, we can provide him, if we know he's going to be delayed, we can automatically provide him amenities, notify the staff at the airport where he's going to be blocked, because he's our platinum customer, we want to give him lounge access, we want to give him his favorite drink, so combine all this information together and that's a use case-- >> When's this going to happen? >> That's life, that's life. >> I want to fly that airline. Okay, so we've been talking a lot about-- >> Mr. American Airlines? I'm not going to put you on the spot there, hold up, that'll get you in trouble. >> Oh yeah, it's a real life use case. >> And said oh hey, you're not going to make your connection, thanks for letting me know. Okay, so, okay we were talking a lot about the way things used to be, the way things are, and the way things are going to be or actually are today, in that last example, and you talked about event driven workloads. One of the things we've been talking about, at SiliconANGLE and on theCUBE is, is workloads, with batch, interactive, Hadoop brought back batch, and now we have what you call, this event driven workloads, we call it the continuous workloads, right? >> All about data immersion, we all call it different things but it's the same thing. >> Right, and when we look at our forecast, we're like wow, this is really going to hit, it hasn't yet, but it's going to hit the steep part of the s-curve, what do you guys expect in terms of adoption for those types of workloads, is it going to be niche, is it going to be predominant? >> I think it should be predominant and I think companies want it to be predominant. What we still need, I think, is a further iteration on the technology and the ability to bring all these different things together. We have the technologies for the different components, we have machine learning technology, predictive analytics technology, business rules technology, event driven architecture technology, but putting it all together in a single framework, right now it's still a real, it's both a technology implementation challenge, and it's an organizational challenge because you have to have data scientists work with IT architects, work with operational people, work with business policy people and just organizationally, bringing everybody-- >> There's organizational gap. That's what you're talking about. 
>> Yeah, but every company wants it to happen, because they all see a competitive advantage in doing it this way. >> And what's some of the things that are, barriers being removed as you see them, because that is a consistent thing we're hearing, the products are getting better, but the organizational culture. >> The easy thing is the technology barriers, that's the thing, you know? That's kind of the easy thing to work on, how do we have single frameworks that bring together everything, that let you develop both the machine learning model, the business rules model, and optimization, resource optimization model in a single platform and manage it all together, that's, we're working on that, and that's going to be-- >> I'll throw a wrinkle into the conversation, hopefully a spark, pun intended. Open source and microservices and cloud native apps are coming, that are, with open source, it's actually coming in and fueling a lot more activity. This should be a helpful thing to your point about more data sources, how do you guys talk about that? Because that's something you have to be part of, enabling the inbound migration of new stuff. >> Yeah, we have, I mean, everything's part of the environment. It's been the case for a while that open source has been kind of the driver of a lot of innovation and we assimilate that, we can either assimilate it directly, help our customers use it via services, package it up and rebrand open source technology as services that we manage and we control and integrate it for, on behalf of our customers. >> Alright, last question for you. Future prediction, what's five years out? What's going to happen in your mind's eye, I'm not going to hold you, I mean IBM to this, you personally, just as you see some of this stuff unfolding, machine learning, we're expecting that to crank things up pretty quickly, I'm seeing cognitive, and cognitive to the core, really rocking and rolling here, so what's your, how'd you see the next five years playing out for decision making? >> The first thing is, I don't see Skynet ever happening, I think we're so-- >> Mark Benioff made a nice reference in the keynote about Terminator, I'm like no one pick up on that on Twitter. >> I don't think that's really, nearly impossible, as a scenario but of course what is going to happen and what we're seeing accelerating on a daily basis, is applying machine learning, cognitive technology to more and more aspects of our daily life but I see it, it's in a passive way, so when you're doing image recognition, that's passive, you have to tell the computer tell me what's in this image but you, the human, as the developer or the programmer, still has to kick that off and has to say okay, now that you've told me there's a cat in an image, what do I do about that and that's something a human still has to do and that's, you know, that's the thing that would be scary if our systems started saying we're going to do something on behalf of you because we understand humans completely and what they need so we're going to do it on your behalf, but that's not going to happen. >> So the role of the human is critical, paramount in all this. >> It's not going to go away, we decide what our business policies are and-- >> But isn't, well, autonomous vehicles are an example of that, but it's not a business policy, it's the car making a decision for us, cos we can't react fast enough. >> But the car is not going to tell you where you want to go. 
If it started, if you get in the car and it said I'm taking you to the doctor because you have a fever, maybe that will happen. (all laugh) >> That's kind of Skynet like. I'd be worried about that. It may make a recommendation. (all laugh) >> Hey, you want to go to the doctor, thank you, no I'm good. >> I really don't see Skynet happening but I do think we're going to get more and more intelligent observations from our systems and that's really cool. >> That's very cool. Harley, thanks so much for coming on theCUBE, sharing the insights, really appreciate it. theCUBE, getting the insights here at IBM Interconnect 2017, I'm John Furrier, stay with us for some more great interviews on day three here, in Las Vegas, more after this short break. (upbeat music)
SUMMARY :
Harley Davis, VP of Decision Management at IBM, talks with John Furrier and Dave Vellante at IBM Interconnect 2017 about making decisions in real time. He explains how streaming events, systems of record, predictive analytics, and business rules come together in platforms such as Decision Server Insights, walks through use cases like fraud detection, targeted offers, and airline disruption management, and argues that machine learning should augment rather than replace human decision making.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
five | QUANTITY | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Mark Benioff | PERSON | 0.99+ |
Harley Davis | PERSON | 0.99+ |
Napa | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Napa Valley | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
American Airlines | ORGANIZATION | 0.99+ |
six months | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
six seconds | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Harley | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
less than 30 minutes | QUANTITY | 0.99+ |
Porsche | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
three concepts | QUANTITY | 0.99+ |
Spark | TITLE | 0.98+ |
three day | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
one point | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
two forces | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
six different websites | QUANTITY | 0.98+ |
Terminator | TITLE | 0.97+ |
911 | COMMERCIAL_ITEM | 0.97+ |
day three | QUANTITY | 0.97+ |
SiliconANGLE | ORGANIZATION | 0.96+ |
Kafka | TITLE | 0.96+ |
five years ago | DATE | 0.95+ |
single system | QUANTITY | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
Interconnect 2017 | EVENT | 0.94+ |
two main ways | QUANTITY | 0.93+ |
single platform | QUANTITY | 0.93+ |
single frameworks | QUANTITY | 0.93+ |
first thing | QUANTITY | 0.93+ |
single framework | QUANTITY | 0.91+ |
three weeks | QUANTITY | 0.91+ |
years ago | DATE | 0.91+ |
Skynet | ORGANIZATION | 0.88+ |
first | EVENT | 0.87+ |
Ukrainian | OTHER | 0.87+ |
first one | QUANTITY | 0.85+ |
one website | QUANTITY | 0.84+ |
ten | DATE | 0.83+ |
last couple of years | DATE | 0.82+ |
second innings | QUANTITY | 0.8+ |
last several years | DATE | 0.8+ |
SPSS | TITLE | 0.79+ |
Hadoop | TITLE | 0.74+ |
years | QUANTITY | 0.71+ |
wave | EVENT | 0.62+ |
plenty of customers | QUANTITY | 0.6+ |
next five years | DATE | 0.56+ |
couple | QUANTITY | 0.56+ |
Andy Lin, Mark III Systems - IBM Interconnect 2017 - #ibminterconnect - #theCUBE
>> Man: Let me check. >> Announcer: Live from Las Vegas, it's The Cube. Covering InterConnect 2017. Brought to you by IBM. >> Okay, welcome back everyone. Day two, we are here live in Las Vegas for IBM InterConnect. This is Silicon Angle's The Cube coverage of IBM's cloud event. The CEO, Ginni Rometty, was just on stage. We're kickin' off wall to wall coverage for three days. I'm John Furrier, my co-host, Dave Vellante, here for all three days. >> And, our next guest is Andy Lin, who's the VP of strategy at Mark III Systems, a 20-plus year IBM platinum partner. Doin' some real cutting edge work with cognitive; as Ginni Rometty said, cognitive to the core is IBM's core strategy. Data first, enterprise strong is kind of the buzz words. Andy, welcome to The Cube. Appreciate you comin' on. >> Thanks for havin' me. >> So, obviously, enterprise strong, you know, it's, it's a kind of whole nother, you know, conversation that we can go deep on, but data first and cognitive to the core is really kind of the things that you guys are really getting into. All kinds of data types. Automating it and making it almost frictionless to move insights out. So, take a minute to explain what Mark III's doing and what your role is with the company. >> Sure. Absolutely. So, I'm Vice President of strategy at Mark III, so I work sort of across all our initiatives, especially areas that are emerging. Just a little bit about Mark III, just historically for background purposes. So, we're a 22 year IBM platinum partner, as you pointed out. We actually started in the mid 90's, actually doing IT infrastructure around the IBM stack at that time. So, we've sort of been with IBM over the last 20 years since the beginning. We've sort of grown up throughout the stack as IBM's evolved over the last two decades. About two and a half years ago, we started a digital development unit, called BlueChasm. And what BlueChasm does, is it basically builds open digital and cognitive platforms on the IBM cloud that are around a lot of the services you pointed out. And, we basically designed it based on use cases that the ecosystem and our clients talk about. And, to give you a couple examples, one of the, one of the big ones that we're seeing a lot of interest around is called video recon. Video recon is a video analytics platform that's API enabled and open at its core. So, regardless of where the video comes from, if it's a content management system, if it's a camera, we're able to basically take in that video, basically watch and listen to the video using Watson and some elements of our own intellectual property. And, then basically return insights based on what it sees and hears, along with time stamps, back to the user to actually take action. >> Yeah. I love the name BlueChasm. It brings up, you know, Geoffrey Moore's Crossing the Chasm. Blue, IBM, big blue, so you know, it's a nice clever play. The BlueChasm opportunity. So, in your mind, for people watching, squint through some of the trends and extract out where you see these opportunities. Because if you're talkin' about new opportunities are emerging because of cloud horsepower and compute and storage and all the greatness of cloud, and you got real time analytics kind of really hittin' the main stream.
That's going to, that's highlighted by internet of things; you can't go anywhere these days without hearing about autonomous vehicles, industrial (mumbles) things, AI, Mark Benioff was sayin', you know, we've seen the movies like Terminator and we've all dreamed about AI, so we can kind of get excited about the prospects. But, the chasm you're talkin' about, this is where these things that were ungettable before, unreachable new things, what are some of those things that you guys are doin' in that chasm? >> Yeah, so I think some of the things that we're doing are basically enabling, like I'll use video recon as an example, right, we're enabling a class to be able to get new insights using basically computer vision, but in an open and accessible way, that they've never been able to do before. Vision itself, I don't think is new or revolutionary. You know, a lot of folks are doing it, self driving cars, etcetera. >> John: Yeah. >> But, I think what is new is being able to make it open and easily accessible to the normal enterprise, the normal service provider. Up to now, it's been, you know you've had, really had to have your own team of, you know, really, really deep AI developers or PhDs to be able to produce it for your own platform. What we're trying to do is basically democratize that. >> John: Yeah. >> So, to give you an example, some use cases that we're, we're sort of working on today, the ability to do things like read meters and gauges, as an example, with a camera. That way you can avoid a situation where somebody has to walk around all the time, you know, look at different things that could be dangerous. That there could be issues actually looking at what you see from a metering perspective. Or to be able to, for instance, in the media entertainment industry or the video production industry, be able to do things like identify shot types, be able to more quickly allow our enterprise users in that particular space to be able to create video content quickly. And, the underlying theme with all this, I think it's really about speed to market. And, how quickly can you iterate and please whoever your customers are in that particular space that you're in. >> So with the video recon, so your, your videos are searchable, essentially. >> (Andy) Correct. >> So, so what do you do? Use Watson, natural language processing to sort of translate them? Now (mumbles), of course, you know, NLP is maybe I don't know 75, 80 percent accurate, how do you close that gap? >> Yeah, so video recon does both visual and audio. So, on the audio portion you are correct. There is some degree of trade off in accuracy relative to what I think the average human can do today. Assuming the human is focused and able to really tag these videos accurately. So, we are able to train it based on things like proper words and things that are enterprise focused. Because I know there, there are a lot of different ways that I think you can maybe attack this today from a video analytics perspective, where we're focused primarily just on the enterprise, solving business problems with, with video analytics. So, you know, taking advantage of it if Watson improves, 'cause we do use (mumbles) tech at its core on the audio perspective. Applying some of our own techniques to basically improve the accuracy of certain words that matter most to the enterprise. One of the things we've noticed is it's an entirely collaborative relationship with our enterprise clients, but really partners.
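(Editor's note: a purely illustrative sketch of how a client application might call a video-analytics service of the kind Andy describes and work with timestamped insights, for example gauge readings. The endpoint, payload, and response schema below are invented for this example; they are not the actual Video Recon API.)

```python
import requests

API_URL = "https://example.com/api/v1/analyze"   # hypothetical endpoint

def analyze_video(video_url: str) -> list[dict]:
    """Submit a video for analysis and return a list of timestamped insights."""
    resp = requests.post(API_URL, json={"video_url": video_url}, timeout=60)
    resp.raise_for_status()
    # Assume the service returns e.g.
    # {"insights": [{"timestamp": 12.4, "label": "gauge", "value": "74 psi"}, ...]}
    return resp.json().get("insights", [])

def gauge_readings(insights: list[dict]) -> list[tuple[float, str]]:
    """Pull out only the meter/gauge readings, keeping their timestamps."""
    return [(i["timestamp"], i["value"]) for i in insights if i.get("label") == "gauge"]

if __name__ == "__main__":
    insights = analyze_video("https://example.com/videos/plant-floor.mp4")
    for ts, value in gauge_readings(insights):
        print(f"{ts:7.1f}s  {value}")
```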
What works well for one may not work well for another. One thing about cognitive is it really depends on the end user as to if this is a good idea or not. Or if this will work for their use case, just based on error, as you pointed out. >> So, to your point, you're identifying enterprise use cases and then tuning the system. Building solutions, essentially, for those use cases. >> Andy: Absolutely. >> Now, you said 22 year IBM platinum partner, so you obviously started well before this so-called digital transformation. >> Andy: Yes. >> You see digital transformation as, you know, revolutionary, or is it more of an evolution of your business? >> I'd definitely say it's an evolution. I think, you know, a lot of the industry buzz words out there are all around, you know, transformation or transition, but for us it's been completely additive. You know, at the end of the day we're just doing what our clients want, you know. And, we're still continuing the core part of our business around modernizing and optimizing IT infrastructure, tech stacks in the data center, also infrastructure services in the cloud. Also, up through the middleware, which is still really as strong as ever. I mean, in fact that business has actually been very much reinforced by some of these capabilities that we brought in on the digital development side. Because, at the end of the day, you know, clients may have a digital unit and they may have, you know, IT, but they're really viewed sort of all the same. A lot of people try to put 'em in two different buckets, bimodal or whatever you want to use. But, you know, inevitably, you know, clients just see a business problem they want to address. >> Yep. >> And, they're saying how can I address it the fastest and most effectively, relative to what their stakeholders want. And, we just realized early on that we had to have that development capability, be able to build platforms, but also guide our clients. If they don't want one of our platforms, if they don't want video recon or the cognitive call center platform, that's perfectly fine. We're more than happy to guide them on how to build something similar for their developers with our developers relative to their tech stack, you know, hopefully on the IBM cloud. >> Andy, one of the things you were pointing out that I think is worth highlighting is the digital transformation buzz word, which has been around for a few years now, really is mainstream right now. >> Andy: Yes. >> People are really working hard to figure this out. We're seeing the disruption on the business model side. You mentioned speed and time to market, that's agility. That's not just a technical development term anymore. It's actually business model. It's business related. >> Andy: Yes. >> But there's two axes of things going on. There's the under the hood, heavy lifting stuff that goes on around getting stuff digitally to work. That's IT, security, and you know, Ginni Rometty talks about a lot of that on stage. That's being enterprise grade or enterprise strong. The other one is this digitization of the real world, right? So, that's creative. That requires insights. That requires kind of a different, it's actually probably maybe more fun for some people, but I mean it depends on who your profile is, but you have kind of two spectrums. Cool and relevant and exciting and intoxicating, creative, user experience driven. You mentioned reading meters. >> Andy: Yeah. >> That's the analog world. >> Andy: Yes. >> That's actually space. That's the world.
That's like, you got the sky, you got the meter. >> Andy: Yeah. >> You got physical impressions. This is the digitization of our world. What's your perspective? How do you talk to customers when they say, "Hey I want to digitize my business." >> Andy: Mm hmm. >> How does it go? What do you say? I mean, do you break it down into those axes? Do you go, did they see it that way? Can you share some color on this digital transformation of digitizing business? >> Yeah, so I mean it really depends on, I think, it normally has to do with interacting with some other stakeholders in a certain way, you know. I think from our perspective it really is about, you know, how they want to interface. And, most of the time you pointed out speed. Speed I think is the number one reason why people are doing the digital transformation. It's not really about cost or these other factors. It's how quickly can I adjust my business model so I can win in the market place? And, you know, I think I pointed this out earlier, but like, you know IoT is huge now. It covers what I call three out of the five senses in my mind. It covers basically touch, smell and taste in many ways. And, for us, I think we're basically trying to help them even get beyond IoT with video. Video really covers, you know, sight and hearing as well. Together that covers all the five senses. And, then you take that and figure out how do I digitize that experience and be able to allow you to interact with your stakeholders. Whether it be your customers, your suppliers or your partners out in the market place. And, then based on that we'll take these building blocks on how we, you know, extend the experience, and work with them on their specific use case. >> So, you got to ingest the data, which is the, you know, the images or data coming in. >> Correct. >> Then you got to prep it, make it available for insights. >> Correct. >> And, produce them, like, really fast. >> Andy: Yep. >> That's hard. >> Andy: It is, yeah. >> It's not trivial. >> No it is not, it's not a trivial problem. Yeah, absolutely. And, I think, you know, there's a lot of opportunity here in the space over the next, I think, two to five years. But you're absolutely right. >> I mean it is, it is challenging. >> And, I want to get your thoughts too, and if you can share your reaction to some of the trends around machine learning, for instance. It's really kind of fueling this democratization. >> Andy: Yeah. >> You mention in the old days it was really hard, there was kind of a black art to, to machine learning or unique special, specialties. And, even data science that's at one level was really, really hard. Now you have common people doing things with visualization. What's the same with machine learning? I mean, you got more data sets coming in. Do you see that trend relevant to what you guys are working on in BlueChasm? >> Absolutely. I think at the core of it, and this wasn't our plan initially three years ago, we didn't realize that this would happen, but every single one of the platforms or prototypes or apps we've built, they all incorporate some degree of machine learning, deep learning within its core. And, this is primarily just driven by, I think, what it takes to give a client a unique platform or a unique service on the market. Because, much of the base digitization, I mean Ginni likes to talk a lot about, you know, the key to differentiating yourself in the digital world is being cognitive. And, we've seen this really play out in practice.
And, I think what's changed, as you pointed out, is that it's easily accessible now to sort of the common man, as I put it. In years past, you really had to have people that are highly specialized. You build your own product. But now through open source- >> There's building blocks out there. >> Absolutely. >> You can just take an open source library and say hey, and then tweak the machine learning. >> Absolutely. And, the ramp up time has come down, you know, dramatically, even for our developers. Just watching them work. I mean, the prototype of video recon was built over the course of a weekend by one of our developers. He just came in one Monday and said, you know, is this, is this interesting? >> He's fired. >> Exactly. And, we were like, yes I think this is interesting. >> Well this is the whole inspiration thing that I talk about, the creativity. This is the two axes, right? >> You try to do that in the old days, I got to get a server provisioned. >> Andy: Yeah. >> I'm done. >> Andy: Right. >> You know, I'm going to go have a beer. Whatever. I mean, there's almost an abandonment going on. We talked to Indiegogo yesterday about how they're funding companies. >> Andy: Yeah. >> You have this new creative action. >> Andy: Mm hmm. >> So you guys are seeing that. Any other examples you can share in terms of color around this kind of innovation? >> Yeah, so we, at BlueChasm we try to let our developers sort of have free rein over what they like to create. So video recon was spawned literally on a side project, you know, as with a lot of companies. It was, you know, a platform that sort of evolved into a commercial product, almost by accident, right? And, we've had others that have been anchored by like what clients had done, but like around the cognitive call center, which basically takes phone calls that are recorded and then basically transcribes and makes them easily searchable for audit reasons, training reasons, etcetera. Same kind of idea. We built things around like cognitive drones. A lot of folks are trying to do things with drones. Drones themselves aren't really novel anymore, but being able to utilize them to collect data in unique ways, I think that industry is definitely evolving. We've built other things like, what I call the Minority Report board, after the scene in the movie where the board sort of looks at you and then based on what it sees of you, of different data points, it shows you an ad or shows you a piece of visual content to allow you to interact. >> John: Yeah. >> I mean, these are, these are examples. You know, we have others. But, you know we've just seen like in this organization if we allow creativity to sort of reign, you know, have free rein, we're able to sort of bring it back in along with some of the strengths of core Mark III about being (mumbles). >> I mean the cognitive is really interesting. It's a programmatic approach to life. And, if you think about it, it's like if you have this collective intelligence with the data, you could offer an augmented reality experience- >> Andy: Yes. >> To anybody now, based upon what you're doin'. >> Absolutely. So I mean, I think that the toughest part right now is figuring out which of the opportunities to pursue. Because, there are so many out there and everyone has some interest in some degree, you know. You have to figure out how to prioritize, you know, which, which of the ones you want to address first. >> John: Yeah. >> And, in what order.
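(Editor's note: a toy sketch of the cognitive call center idea Andy mentions above, transcribing recorded calls and making the transcripts keyword-searchable. The transcribe function is a placeholder for whatever speech-to-text service would actually be used, and the in-memory index exists only for illustration.)

```python
from collections import defaultdict

def transcribe(audio_path: str) -> str:
    """Placeholder: a real system would call a speech-to-text service here."""
    return "customer asked about a refund on the March invoice"

class CallIndex:
    def __init__(self):
        self._index = defaultdict(set)   # word -> set of call ids
        self._transcripts = {}

    def add_call(self, call_id: str, audio_path: str) -> None:
        text = transcribe(audio_path)
        self._transcripts[call_id] = text
        for word in text.lower().split():
            self._index[word].add(call_id)

    def search(self, word: str) -> list[str]:
        return sorted(self._index.get(word.lower(), set()))

index = CallIndex()
index.add_call("call-001", "/recordings/call-001.wav")
print(index.search("refund"))   # ['call-001']
```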
What we've noticed is that a lot of these are building blocks that lead to other greater and greater platform concepts, and part of the challenge is figuring out what order you want to actually build these in. And, through, you know, microservices, through containerization, all these, you know, awesome evolutions as far as like with cloud and infrastructure technology, you're really able to piece together these pieces to build amazing (mumbles) quickly. >> The cloud native stuff is booming right now. >> Yeah. >> It's really fun to watch. Microservices, (mumbles), this orchestration, composability is just kickin' ass. >> Absolutely. >> And, all your clients are basically becoming software companies. They're takin' your services and building out their own SaaS capabilities. >> Andy: Right. >> Right? >> Without a doubt. I mean, you know the cloud (mumbles), container revolution's been significant for us. I mean we, we added the audio component to video recon based on some of the work we've been doing on the call center side. It was almost by accident. And, we were able to really put them together in a day because we were able to basically easily compose the overall platform at that time, or the prototype of the platform at that time, just by linking together those services. So, we see this as a pattern moving forward. >> Andy, thanks for coming on The Cube. Really appreciate it. In the quick 30 seconds, what are you doin' here at the show? What are you guys talkin' about? What's some of the activity? Coolest thing you're seeing? Share some insight, what's going on here in Las Vegas. Share some perspective. >> Yeah, absolutely. So, we have a booth here in Vegas. We're demoing some of the platforms we talked about: video recon, cognitive call center. We're at booth 687, which is toward the center back of the expo center. We have four breakouts that we'll be doing as well. Talking about some of these concepts, as well as some of our projects that involve, you know, modernization of the data center as well. So, the true, what I call, IBM full stack. >> And, for the folks that aren't here watching, is there, the website address? Where can they go to get more information? >> Yeah, absolutely. You can go to Mark III sys, M A R K triple I S Y S dot com, which is our website. If you want to learn a little bit more about video recon you can go to video recon dot I O. We have a very simple demo page, but you know, if you're interested in learning more or you want to explore if we can accommodate your specific use case, please feel free to reach out to me. Also, Mark III Systems, M A R K triple I systems, on Twitter as well, and I can get back to you. >> Well, you know we're going to follow up with you. Going to get all of our Cube videos into the cognitive era. You'll be seeing us, pinging you online for that. >> Yeah. >> Love the video recon, just great. BlueChasm, great, great initiative. Congratulations on that. >> Thank you. >> Thanks for comin' on. It's The Cube live here in Las Vegas. Day two of coverage, wall to wall. I'm John Furrier with Dave Vellante. Stay with us. More great interviews after this short break.
SUMMARY :
Andy Lin, VP of Strategy at Mark III Systems, joins theCUBE at IBM InterConnect 2017 to talk about BlueChasm, the company's digital development unit that builds open, cognitive platforms on the IBM cloud. He describes use cases such as video recon for video analytics, the cognitive call center, and cognitive drones, and explains how open source machine learning, cloud services, and microservices are making these capabilities accessible to mainstream enterprises.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
Andy Lin | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Mark Benioff | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
22 year | QUANTITY | 0.99+ |
five senses | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Mark Three Systems | ORGANIZATION | 0.99+ |
20 plus year | QUANTITY | 0.99+ |
mid 90's | DATE | 0.99+ |
five years | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
BlueChasm | ORGANIZATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Ginny | PERSON | 0.99+ |
one | QUANTITY | 0.98+ |
Mark III Systems | ORGANIZATION | 0.98+ |
Indiegogo | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.97+ |
three years ago | DATE | 0.97+ |
Mark Three | PERSON | 0.97+ |
two axes | QUANTITY | 0.97+ |
75, 80 percent | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
Day two | QUANTITY | 0.96+ |
About two and a half years ago | DATE | 0.96+ |
Blue | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.95+ |
two different buckets | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
Monday | DATE | 0.94+ |
Watson | TITLE | 0.94+ |
a day | QUANTITY | 0.93+ |
Silicon Angle | ORGANIZATION | 0.93+ |
Jean Francois Puget, IBM | IBM Machine Learning Launch 2017
>> Announcer: Live from New York, it's theCUBE, covering the IBM machine learning launch event. Brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Alright, we're back. Jean Francois Puget is here, he's the distinguished engineer for machine learning and optimization at IBM Analytics, CUBE alum. Good to see you again. >> Yes. >> Thanks very much for coming on, big day for you guys. >> Jean Francois: Indeed. >> It's like giving birth every time you guys give one of these products. We saw you a little bit in the analyst meeting, pretty well attended. Give us the highlights from your standpoint. What are the key things that we should be focused on in this announcement? >> For most people, machine learning equals machine learning algorithms. Algorithms, when you look at newspapers or blogs, social media, it's all about algorithms. Our view is that, sure, you need algorithms for machine learning, but you need steps before you run algorithms, and after. So before, you need to get data, to transform it, to make it usable for machine learning. And then, you run algorithms. These produce models, and then, you need to move your models into a production environment. For instance, you use an algorithm to learn from past credit card transaction fraud. You can learn models, patterns, that correspond to fraud. Then, you want to use those models, those patterns, in your payment system. And moving from where you run the algorithm to the operational system is a nightmare today, so our value is to automate what you do before you run algorithms, and then what you do after. That's our differentiator. >> I've had some folks in theCUBE in the past have said, years ago, actually, said, "You know what, algorithms are plentiful." I think he made the statement, I remember my friend Avi Mehta, "Algorithms are free. "It's what you do with them that matters." >> Exactly. I believe open source won for machine learning algorithms. Now the future is with open source, clearly. But it solves only a part of the problem you're facing if you want to put machine learning into action. So, exactly what you said. What you do with the results of the algorithm is key. And open source people don't care much about it, for good reasons. They are focusing on producing the best algorithm. We are focusing on creating value for our customers. It's different. >> In terms of, you mentioned open source a couple times, in terms of customer choice, what's your philosophy with regard to the various tooling and platforms for open source, how do you go about selecting which to support? >> Machine learning is fascinating. It's overhyped, maybe, but it's also moving very quickly. Every year there is new cool stuff. Five years ago, nobody spoke about deep learning. Now it's everywhere. Who knows what will happen next year? Our take is to support open source, to support the top open source packages. We don't know which one will win in the future. We don't even know if one will be enough for all needs. We believe one size does not fit all, so our take is to support a curated list of major open source packages. We start with Spark ML for many reasons, but we won't stop at Spark ML. >> Okay, I wonder if we can talk use cases. Two of my favorite, well, let's just start with fraud. Fraud has become much, much better over the past certainly 10 years, but still not perfect. I don't know if perfection is achievable, but a lot of false positives. How will machine learning affect that?
Can we expect as consumers even better fraud detection in more real time? >> If we think of the full life cycle going from data to value, we will provide a better answer. We still use machine learning algorithms to create models, but a model does not tell you what to do. It will tell you, okay, for this credit card transaction coming in, it has a high probability to be fraud. Or this one has a lower priority, uh, probability. But then it's up to the designer of the overall application to make decisions, so what we recommend is to use the machine learning prediction, but not only that, and then use, maybe, (murmuring). For instance, if your machine learning model tells you this is a fraud with a high probability, say 90%, and this is a customer you know very well, it's a 10-year customer you know very well, then you can be confident that it's a fraud. Then if the next prediction tells you this is 70% probability, but it's a customer of one week. In a week, we don't know the customer, so the confidence we can get in machine learning should be low, and there you will not reject the transaction immediately. Maybe you will enter, you don't approve it automatically, maybe you will send a one-time passcode, or you enter a serve vendor system, but you don't reject it outright. Really, the idea is to use machine learning predictions as yet another input for making decisions. You're making decisions informed by what you could learn from your past. But it's not replacing human decision-making. Our approach with IBM, you don't see IBM speak much about artificial intelligence in general because we don't believe we're here to replace humans. We're here to assist humans, so we say augmented intelligence, or assistance. That's the role we see for machine learning. It will give you additional data so that you make better decisions. >> It's not the concept that you object to, it's the term artificial intelligence. It's really machine intelligence, it's not fake. >> I started my career as a PhD in artificial intelligence, I won't say when, but long enough ago. At that time, there were already promises that we would have Terminator in the next decade and this and that. And the same happened in the '60s, or it was after the '60s. And then, there is an AI winter, and we have a risk here to have an AI winter because some people are just raising red flags that are not substantiated, I believe. I don't think the technology's here that we can replace human decision-making altogether any time soon, but we can help. We can certainly make people more efficient, more productive with machine learning. >> Having said that, there are a lot of cognitive functions that are getting replaced, maybe not by so-called artificial intelligence, but certainly by machines and automation. >> Yes, so we're automating a number of things, and maybe we won't need to have people do quality checks and can just have an automated vision system detect defects. Sure, so we're automating more and more, but this is not new, it has been going on for centuries. >> Well, the list evolved. So, what can humans do that machines can't, and how would you expect that to change? >> We're moving away from IBM machine learning, but it is interesting. You know, each time there is a capacity that a machine can automate, we basically redefine intelligence to exclude it, so you know. That's what I foresee. >> Yeah, well, robots a while ago, Stu, couldn't climb stairs, and now, look at that. >> Do we feel threatened because a robot can climb a stair faster than us?
Not necessarily. >> No, it doesn't bother us, right. Okay, question? >> Yeah, so I guess, bringing it back down to the solution that we're talking about today, if I now am doing, I'm doing the analytics, the machine learning on the mainframe, how do we make sure that we don't overrun and blow out all our MIPS? >> We recommend, so we are not using the mainframe's base compute system. We recommend using zIIPs, so additional cores, to not overload, so it's a very important point. We claim, okay, if you do everything on the mainframe, you can learn from operational data. You don't want to disturb, and "you don't want to disturb" takes a lot of different meanings. One that you just said, you don't want to slow down your operational processing because you're going to hurt your business. But you also want to be careful. Say we have a payment system where there is a machine learning model predicting fraud probability, as part of the system. You don't want a young bright data scientist to decide that he had a great idea, a great model, and he wants to push his model into production without asking anyone. So you want to control that. That's why we insist, we are providing governance that includes a lot of things like keeping track of how models were created from which data sets, so lineage. We also want to have access control and not allow anyone to just deploy a new model because we make it easy to deploy, so we want to have role-based access and only someone with some executive, well, it depends on the customer, but not everybody can update the production system, and we want to support that. And that's something that differentiates us from open source. Open source developers, they don't care about governance. It's not their problem, but it is our customers' problem, so this solution will come with all the governance and integrity constraints you can expect from us. >> Can you speak to, first solution's going to be on z/OS, what's the roadmap look like and what are some of those challenges of rolling this out to other private cloud solutions? >> We are going to ship this quarter IBM machine learning for Z. It starts with Spark ML as the base open source. This is interesting, but it's not all there is for machine learning. So that's how we start. We're going to add more in the future. Last week we announced we will ship Anaconda, which is a major distribution for the Python ecosystem, and it includes a number of machine learning open source packages. We announced it for next quarter. >> I believe in the press release it said down the road things like TensorFlow are coming, H2O. >> But Anaconda, we announced it for next quarter, so we will leverage this when it's out. Then indeed, we have a roadmap to include major open source, so the major open source are the ones from Anaconda (murmuring), mostly. Key deep learning, so TensorFlow and probably one or two additional, we're still discussing. One that I'm very keen on, it's called XGBoost, in one word. People don't speak about it in newspapers, but this is what wins all Kaggle competitions. Kaggle is a machine learning competition site. When I say all, all that are not image recognition competitions. >> Dave: And that was ex-- >> XGBoost, X-G-B-O-O-S-T. >> Dave: XGBoost, okay. >> XGBoost, and it's-- >> Dave: X-ray gamma, right? >> It's really a package. When I say we don't know which package will win, XGBoost was introduced a year ago also, or maybe a bit more, but not so long ago, and now, if you have structured data, it is the best choice today.
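(Editor's note: a minimal sketch of training XGBoost on structured data, as discussed above, assuming the open source xgboost and scikit-learn packages and synthetic data. The parameters and file name are arbitrary; the point is that the saved model artifact, not the training environment, is what would get promoted toward an operational system.)

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic structured data standing in for, say, historical transactions.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))

# Persist the trained model so scoring can happen close to the data,
# wherever that is (a made-up file name for illustration).
model.save_model("fraud_model.json")
```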
It's a really fast-moving space, but so, we will support major deep learning packages and major classical machine learning packages like the ones from Anaconda or XGBoost. The other thing is, we start with Z. We announced in the analyst session that we will have a Power version and a private cloud, meaning an x86 version, as well. I can't tell you when because it's not firm, but it will come. >> And in public cloud as well, I guess we'll, you've got components in the public cloud today like the Watson Data Platform that you've extracted and put here. >> We have extracted part of the Data Science Experience, so we've extracted notebooks and a graphical tool called ModelBuilder from DSX as part of IBM machine learning now, and we're going to add more of DSX as we go. But the goal is to really share code and function across private cloud and public cloud. As Rob Thomas defined it, we want with private cloud to offer all the features and functionality of public cloud, except that it would run inside a firewall. We are really developing machine learning and Watson machine learning on a common code base. It's an internal open source project. We share code, and then, we ship on different platforms. >> I mean, you haven't, just now, used the word hybrid. Every now and then IBM does, but do you see that so-called hybrid use case as viable, or do you see it more, some workloads should run on prem, some should run in the cloud, and maybe they'll never come together? >> Machine learning, you basically have two phases, one is training and the other is scoring. I see people moving training to cloud quite easily, unless there is some regulation about data privacy. But training is a good fit for cloud because usually you need a large computing system but only for a limited time, so elasticity's great. But then deployment, if you want to score a transaction in CICS, it has to run beside CICS, not in the cloud. If you want to score data on an IoT gateway, you want to score on the gateway, not in a data center. I would say that may not be what people think of first, but what will really drive the split between public cloud, private, and on prem is where you want to apply your machine learning models, where you want to score. For instance, smart watches, they are switching to be fitness measurement systems. You want to score your health data on the watch, not in the internet somewhere. >> Right, and in that CICS example that you gave, you'd essentially be bringing the model to the CICS data, is that right? >> Yes, that's what we do. That's the value of machine learning for Z: if you want to score transactions happening on Z, you need to be running on Z. So it's clear, mainframe people, they don't want to hear about public cloud, so they will be the last ones moving. They have their reasons, but they like the mainframe because it is really, really secure and private. >> Dave: Public cloud's a dirty word. >> Yes, yes, for Z users. At least that's what I was told, and I could check with many people. But we know that in general the move is for public cloud, so we want to help people, depending on their journey to the cloud. >> You've got one of those, too. Jean Francois, thanks very much for coming on theCUBE, it was really a pleasure having you back. >> Thank you. >> You're welcome. Alright, keep it right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from the Waldorf Astoria. IBM's machine learning announcement, be right back. (electronic keyboard music)
SUMMARY :
Jean Francois Puget, IBM distinguished engineer for machine learning and optimization, explains that IBM's differentiator is automating what happens before and after the algorithms run, from preparing data to moving models into production. He covers fraud scoring as one input to human decision making, governance and role-based control for models on the mainframe, support for open source such as Spark ML, Anaconda, TensorFlow, and XGBoost, and why training may move to the cloud while scoring happens close to the data, whether in CICS, on an IoT gateway, or on a watch.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jean Francois | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
10-year | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Avi Mehta | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
Anaconda | ORGANIZATION | 0.99+ |
70% | QUANTITY | 0.99+ |
Jean Francois Puget | PERSON | 0.99+ |
next year | DATE | 0.99+ |
Two | QUANTITY | 0.99+ |
Last week | DATE | 0.99+ |
next quarter | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
one-time | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Five years ago | DATE | 0.99+ |
one word | QUANTITY | 0.99+ |
CICS | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
a year ago | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
next decade | DATE | 0.98+ |
one week | QUANTITY | 0.98+ |
first solution | QUANTITY | 0.98+ |
XGBoost | TITLE | 0.98+ |
a week | QUANTITY | 0.97+ |
Spark ML | TITLE | 0.97+ |
'60s | DATE | 0.97+ |
ModelBuilder | TITLE | 0.96+ |
one size | QUANTITY | 0.96+ |
One | QUANTITY | 0.95+ |
first | QUANTITY | 0.94+ |
Watson Data Platform | TITLE | 0.93+ |
each time | QUANTITY | 0.93+ |
Kaggle | ORGANIZATION | 0.92+ |
Stu | PERSON | 0.91+ |
this quarter | DATE | 0.91+ |
DSX | TITLE | 0.89+ |
XGBoost | ORGANIZATION | 0.89+ |
Waldorf Astoria | ORGANIZATION | 0.86+ |
Spark ML. | TITLE | 0.85+ |
z/OS | TITLE | 0.82+ |
years | DATE | 0.8+ |
centuries | QUANTITY | 0.75+ |
10 years | QUANTITY | 0.75+ |
DSX | ORGANIZATION | 0.72+ |
Terminator | TITLE | 0.64+ |
XTC69X | TITLE | 0.63+ |
IBM Machine Learning Launch 2017 | EVENT | 0.63+ |
couple times | QUANTITY | 0.57+ |
machine learning | EVENT | 0.56+ |
X | TITLE | 0.56+ |
Watson | TITLE | 0.55+ |
these products | QUANTITY | 0.53+ |
-G-B | COMMERCIAL_ITEM | 0.53+ |
H20 | ORGANIZATION | 0.52+ |
TensorFlow | ORGANIZATION | 0.5+ |
theCUBE | ORGANIZATION | 0.49+ |
CUBE | ORGANIZATION | 0.37+ |