Andy Thurai, Constellation Research | CloudNativeSecurityCon 23
(upbeat music) >> Hi everybody, welcome back to our coverage of the Cloud Native Security Con. I'm Dave Vellante, here in our Boston studio. We're connecting today with Palo Alto, with John Furrier and Lisa Martin. We're also live from the show floor in Seattle. But right now, I'm here with Andy Thurai, who's from Constellation Research, friend of theCUBE, and we're going to discuss the intersection of AI and security, the potential of AI, the risks and the future. Andy, welcome, good to see you again. >> Good to be here again. >> Hey, so let's get into it. Can you talk a little bit about, I know this is a passion of yours, the ethical considerations surrounding AI? I mean, it's front and center in the news, and you've got accountability, privacy, security, biases. Should we be worried about AI from a security perspective? >> Absolutely, man, you should be worried. See, the problem is, people don't realize this, right? I mean, ChatGPT being the new shiny object, it's all the craze right now. But the problem is, most of the content that's produced, either by ChatGPT or even by others, comes as-is: no warranties, no accountability, none whatsoever. If it's just content, that's okay. But what if it's something like code that you use? For example, one of their side projects, GitHub's Copilot, which is actually the OpenAI plus Microsoft plus GitHub combo, they allow you to produce code; AI writes code, basically, right? But when it writes code, the problem with that is, it's not exactly stolen, but the models are created by using the GitHub code. Actually, they're getting sued for that, people saying, "You can't use our code." Actually, there's a guy, Tim Davidson I think the professor's named, who demonstrated how the AI produces an exact copy of code that he had written. So right now, there are a lot of security, accountability, and privacy issues. Use it either to train or to learn. But in my view, it's not ready for enterprise grade yet. >> So, Brian Behlendorf today in his keynote said he's really worried about ChatGPT being used to automate spearphishing. So I'm like, okay, let's unpack that a little bit. Is the concern there that ChatGPT writes such compelling phishing content that it's going to increase the probability of somebody clicking on it, or are there other dimensions? >> It could, and it's not necessarily just ChatGPT, for that matter, right? AI, which the hackers are actually already using to an extent, can be used to individualize content. For example, one of the things that lets you easily identify a phishing attack, when you're looking at the emails coming in, is that you look at some of the key elements in it, whether it's a human doing the looking or even an automated, AI-based system. They look at certain things and they say, "Okay, this is phishing." But what if you were to read an email that looks like an exact copy of what I would've sent to you, saying, "Hey Dave, are you on for tomorrow?" or "Click on this link to do whatever"? It can individualize the message. That's where you get individualized content at the volume and scale of a mass campaign, and that can be done using AI, which is what scares me. >> Is there a flip side to AI? How is it being utilized to help cybersecurity? And maybe you could talk about some of the more successful examples of AI in security. Like, are there use cases or companies out there, Andy, that you find, I know you're close to a lot of firms that are leading in this area.
You and I have talked about CrowdStrike, I know Palo Alto Networks, so is there a positive side to this story? >> Yeah, absolutely, right. Those are some of the good companies you mentioned, CrowdStrike, Palo Alto. Darktrace is another one that I closely follow, which is a good company as well, and they're using AI for security purposes. So, here's the thing, right? When people use malware detection systems, most of the malware detection systems in today's security and malware tools use some sort of signature and pattern scanning of the malware. Do you know how many identified malwares there are today in the repository, in the library? More than a billion. A billion. So, if you are checking for every malware in your repository, that's not going to work. Pattern-based recognition is not going to work. So, you've got to figure out a different way of identification, a pattern of usage, not just a signature in the malware, right? Or there are other areas you could use, things like usage patterns. For example, if Andy is coming in to work at a certain time, you could combine that with facial recognition and ask: should he be in here at that time, and is he doing what he's supposed to be doing? There are a lot of things you could do using that, right? And the AIOps use cases, which is one of my favorite areas, one where I do a lot of work, right? It has use cases for detecting things that are anomalous, that are not happening the way they're supposed to, and reducing the noise so it escalates only the things it's supposed to. So, AIOps is a great use case in security areas, and they're not using it to a great extent yet. Incident management is another area. >> So, in your malware example, you're saying, okay, known malware, pretty much anybody can deal with that now. That's sort of yesterday's problem. >> The unknown is the problem. >> It's the unknown malware, really trying to understand the patterns, and the patterns are going to change. It's not like you're seeing a common signature, 'cause they're going to use AI to change things up at scale. >> So, here's the problem, right? The malware writers are also using AI now, right? So, they're not going to write the old malware and send it to you. They are actually creating malware on the fly. It's entirely possible in today's world that they can create malware, drop it in your systems, and it'll look for the, let me get that name right, it's called the TTPs: Tactics, Techniques and Procedures. It'll look for that to figure out, okay, am I following the right pattern? And then the malware can sense it, saying, okay, that's the one they're detecting, I'm going to change on the fly. So, AI can code itself on the fly, rather, malware can code itself on the fly, which is going to be hard to detect. >> Well, and when you talk about TTPs, when you talk to folks like Kevin Mandia of Mandiant, recently purchased by Google, or others of those with a big observation space, they'll talk about how the most malicious hacks that they see involve lateral movement. So, that's obviously something that people are looking for, AI's looking for that. And of course, the hackers are going to try to mask that lateral movement, living off the land and other things. How do you see AI impacting the future of cyber? We talked about the risks and the good.
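(Editor's note: as a concrete illustration of the usage-pattern detection Andy describes above, not something shown in the interview, here is a minimal sketch of baseline-versus-anomaly scoring. The login-hour data, the z-score threshold, and the function names are all illustrative assumptions, not a production detector.)

```python
# A minimal sketch of usage-pattern anomaly detection: flag activity that
# deviates from a user's learned baseline, instead of matching one of a
# billion known malware signatures. All data and thresholds are invented.
from statistics import mean, stdev

def build_baseline(login_hours):
    """Learn a per-user baseline from historical login hours (0-23)."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour sits more than `threshold` standard
    deviations from the user's historical mean."""
    mu, sigma = baseline
    if sigma == 0:                       # user always logs in at the same hour
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Andy's example: should this person be here at this time?
history = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]   # typical office hours
baseline = build_baseline(history)
print(is_anomalous(9, baseline))   # False -- normal working pattern
print(is_anomalous(3, baseline))   # True  -- 3 a.m. login, escalate
```

The same shape of check generalizes to API calls per minute, bytes transferred, or hosts touched, which is how anomaly detection can surface the lateral movement discussed above without a known signature.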
One of the things that Brian Behlendorf also mentioned is that, he pointed out that in the early days of the internet, the protocols had an inherent element of trust involved. So, things like SMTP didn't have security built in, and they built up a lot of technical debt. Do you see AI being able to help with that? What steps do you see being taken to ensure that AI-based systems are secure? >> So, the major difference between the older systems and the newer systems is that the older systems, sadly even today, a lot of them are rules-based. If it's a rules-based system, you're dead in the water, not able to adapt, right? The AI-based systems can somewhat learn from the patterns, as I was talking about, for example... >> When you say rules-based systems, you mean here's the policy, here's the rule, if it's not followed, but then you're saying AI will blow that away. >> AI will blow that away. You don't have to necessarily codify things, saying, okay, if this, then do this. You don't have to necessarily do that. AI can, somewhat, to an extent, self-learn, saying, okay, if that doesn't happen, if this is not a pattern that I know is supposed to happen, who should I escalate this to? Who does this system belong to? And the other thing, the AIOps use case we talked about, right, the anomalies. When an anomaly happens, the system can look closely at it, saying, okay, this is not normal behavior or usage. Is that because the system's being overused, or is it because somebody's trying to access something? It could do the anomaly detection, anomaly prevention, or even prediction to an extent. And that's where AI could be very useful. >> So, how about the developer angle? 'Cause CNCF, the event in Seattle, is all around developers. How can AI be integrated? We did a lot of talk at the conference about shift-left; we talked about shift-left and protect right, meaning protect the runtime. So, both are important. So what steps should be taken to ensure that AI systems are being developed in a secure and ethically sound way? What's the role of developers in that regard? >> How long have you got? (Both laughing) I think we could go on for days on that. So, here's the problem, right? A lot of these companies are trying to see, I mean, you might have seen in the news that Buzzfeed is trying to replace its writers and have ChatGPT create the things they were creating, and a lot of enterprises... >> How, they're going to fire their writers? >> Yeah, they'd replace the writers. >> It's like automated vehicles and automated Uber drivers. >> So, the problem is, a lot of enterprises still haven't done that, but at least the ones I'm speaking to are thinking about it, saying, "Hey, you know what, can I replace my developers, because they are so expensive? Can I replace them with AI-generated code?" There are a few issues with that. One, AI-generated code is based on some sort of snippet of code that's already available. So, you get into copyright issues; that's issue number one, right? Issue number two, if AI creates code and something were to go wrong, who's responsible for that? There's no accountability right now. Are you, as the company creating the system, responsible? Or is it ChatGPT, is Microsoft responsible? >> Or is it the developer? >> Or the developer. >> The individual developer might be. So, they're going to be cautious about that liability. >> Well, so one of the areas where I'm seeing a lot of enterprises using this is, they're using it to teach developers to learn things.
"You know what, if you're going to code, this is a good way to code." For that, it's okay, because you're just teaching them. But if you are going to put out actual production code, this is what I advise companies: look, if somebody's using it even to create code, whether with or without your permission, make sure that once the code is committed, you validate it 100%, whether it's code or a model, and even make sure that the data you're feeding into it is completely free of bias, right? Because at the end of the day, it doesn't matter who, what, or when did it; if you put a service or a system out there, it carries your company's liability, and it's your system and code in place. You're going to be screwed regardless: if something were to go wrong, you are the first person who's liable for it. >> Andy, when you think about the dangers of AI, what keeps you up at night if you're a security professional, an AI-and-security professional? We talked about ChatGPT doing things we haven't even imagined; the hackers are going to get creative. But what worries you the most when you think about this topic? >> A lot, a lot, right? Let's start off with an example; actually, I don't know if you had a chance to see it or not. Hackers used a deepfake mechanism to fool a bank in Hong Kong into transferring $35 million to a fake account, and the money is gone, right? And the problem is, what they did was, they interacted with a manager, and they learned about this executive who can control a big account, and cloned his voice, cloned his patterns, how he calls, what he talks about, the whole persona he has. After learning that, they called the branch manager, the bank manager, and said, "Hey, you know what, move this much money to wherever." So, that's one kind of phishing, a kind of deepfake, that can come. And that's just one example. Imagine where business is conducted just by using voice or phone calls. That's an area of concern, if you were to do that. And remember, this became an uproar a few years back when deepfakes put out the video of Tom Cruise and others we talked about in the past, right? And Tom Cruise looked at the video, and he said he couldn't distinguish it, that he hadn't done it. It is so close, that close, right? And they are doing things like, they're using GANs... >> Awesome Instagram account, by the way, the guy's hilarious, right? >> So, they're using a lot of these fake videos and fake stuff. As long as it's only for entertainment purposes, good. But imagine doing... >> That's right there, but... >> But during the election season, when people were to put something out saying, okay, this current president or ex-president said what? And the masses believe right now whatever they're seeing on TV; that's the unfortunate thing. I mean, there's no fact-checking involved, and you could change governments and elections using that, which is scary shit, right? >> When you think about 2016, that was when we really first saw the weaponization of social, the heavy use of social, and then 2020 was like, wow. >> To the next level. >> It was crazy. The polarization. 2024, would deepfakes... >> Could be the next level, yeah. >> I mean, it's just going to escalate. What about public policy? I want to pick your brain on this, because I've seen situations where the EU, for example, is going to restrict the ability to ship certain code if it's involved with critical infrastructure.
So, let's say, for example, you're running a nuclear facility and you've got the code that protects that facility, and it could be useful against some other malware outside of that country, but you're restricted from sending it for whatever reason, data sovereignty. Is public policy aligned with the objectives in this new world? Or, I mean, normally it has to catch up. Is that going to be a problem, in your view? >> It is, because when it comes to laws, they're always miles behind when a new innovation happens. It's not just AI, right? I mean, the same thing happened with IoT. The same thing happened with whatever other new emerging tech you have. The lawmakers have to understand there's an issue, and they have to see a continued pattern of misuse of the technology before they'll come up with something. The EU, in many ways, is ahead of things. So, they've put a lot of restrictions in place about what AI can or cannot do. The US is way behind on that, right? But California has done some things; for example, if you are talking to a chatbot, then they have to basically disclose that to the customer, saying that you're talking to a chatbot, not to a human. And that's just a very basic rule that they have in place. I mean, there are times when a decision is made by the, the problem is, AI is a black box now. The decision making is also a black box now, and we don't tell people. And the problem is, if you tell people, you'll get sued immediately, because every single time, we talked about this last time, there are cases involving AI making decisions, and they get thrown out the window all the time if you can't substantiate them. So, the bottom line is, yes, AI can assist and help you in making decisions, but just use it as an assistance mechanism. A human always has to be in the loop, right? >> Will AI help, in your view, with supply chain, the software supply chain security? Or is it, it's always a balance, right? I mean, I feel like the attackers are more advanced in some ways. It's like they're on offense, let's say, right? So, when you're calling the plays, you know where you're going; the defense has to respond to it. So in that sense, the hackers have an advantage. So, what's the balance with the software supply chain? Do the hackers have the advantage, because they can use AI to accelerate their penetration of the software supply chain? Or will AI, in your view, be a good defensive mechanism? >> It could be, but the problem is the velocity and veracity with which things can be done using AI, whether it's phishing, or malware, or other security and vulnerability scanning, the whole nine yards. It's scary, because the hackers have a full advantage right now. And actually, I think OpenAI recently put out two things. One is, it's able to detect content if it was generated by ChatGPT. So basically, if you're trying to fake it, because a lot of schools were complaining about it, that's why they came up with the mechanism, if you're trying to create a fake, there's a mechanism for them to identify it. But that's still a step behind, right? And the hackers are using these things to their advantage. Actually, ChatGPT made a rule, if you go there and read the terms and conditions; it's basically an honor rule, suggesting you can't use this for certain purposes, to create a model that creates a security threat, as if people are going to listen. So, if there were a way or mechanism to restrict hackers from using these technologies, that would be great. But I don't see that happening.
So, know that these guys have an advantage, know that they're using AI, and you have to do things to be prepared. One thing I was mentioning is, if somebody writes code, if somebody commits code right now, the problem is with the agile methodologies. If somebody writes code, if they commit code, you assume it's right and legit, and you immediately push it out into production, because the need for speed is there, right? But if you continue to do that with AI-produced code, you're screwed. >> So, bottom line, is AI going to speed us up in a security context, or is it going to slow us down? >> Well, in the current version, the AI systems are flawed, because even with ChatGPT, if you look at the large language models, you look at the core body of data that's available in the world as of today and then train the models using that data, right? But people are forgetting that's based on today's data. The data changes on a second basis, or on a minute basis. So, if I want to do something based on tomorrow, or the day after, you have to retrain the models. So, the data is already stale. That in itself is a problem, and the cost of retraining is going to be a problem too. So overall, AI is a good first step. Use it with caution, is what I want to say. The system is flawed now; if you use it as-is, you'll be screwed, it's dangerous. >> Andy, you've got to go, thanks so much for coming in, appreciate it. >> Thanks for having me. >> You're very welcome. So, we're going wall to wall with our coverage of the Cloud Native Security Con. I'm Dave Vellante in the Boston studio; John Furrier and Lisa Martin are in Palo Alto. We're going to be live on the show floor as well, bringing in keynote speakers and others on the ground. Keep it right there for more coverage on theCUBE. (upbeat music)
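(Editor's note: Andy's advice in this segment, validate AI-assisted code 100% once it's committed rather than pushing it straight to production, could look something like the following pre-merge gate. This is a minimal sketch under assumptions: `licensecheck`, `bandit`, and `pytest` merely stand in for whatever provenance scanner, static analyzer, and test suite a team actually runs.)

```python
# A minimal sketch of a commit-time validation gate for AI-assisted code.
# The specific tools wired in here are illustrative assumptions; the point
# is that nothing goes to production on trust alone.
import subprocess
import sys

def run(cmd):
    """Run one gating command; any nonzero exit blocks the merge."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"BLOCKED by {' '.join(cmd)}:\n{result.stdout}{result.stderr}")
        return False
    return True

def validate_commit(paths):
    gates = [
        ["licensecheck", *paths],        # provenance/copyright scan (assumed tool)
        ["bandit", "-q", "-r", *paths],  # static security analysis
        ["pytest", "-q"],                # the change must pass the test suite
    ]
    return all(run(gate) for gate in gates)

if __name__ == "__main__":
    # e.g. wired into a pre-merge hook: python gate.py src/
    sys.exit(0 if validate_commit(sys.argv[1:]) else 1)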
Breaking Analysis: AI Goes Mainstream But ROI Remains Elusive
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> A decade of big data investments, combined with cloud scale, the rise of much more cost-effective processing power, and the introduction of advanced tooling, has catapulted machine intelligence to the forefront of technology investments. No matter what job you have, your operation will be AI-powered within five years, and machines may actually even be doing your job. Artificial intelligence is being infused into applications, infrastructure, equipment, and virtually every aspect of our lives. AI is proving to be extremely helpful at things like controlling vehicles, speeding up medical diagnoses, processing language, advancing science, and generally raising the stakes on what it means to apply technology for business advantage. But business value realization has been a challenge for most organizations, due to lack of skills, complexity of programming models, immature technology integration, sizable upfront investments, ethical concerns, and lack of business alignment. Mastering AI technology will not be a requirement for success, in our view. However, figuring out how and where to apply AI to your business will be crucial. That means understanding the business case, picking the right technology partner, experimenting in bite-sized chunks, and quickly identifying winners to double down on from an investment standpoint. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we update you on the state of AI and what it means for the competition. And to do so, we invite into our studios Andy Thurai of Constellation Research. Andy covers AI deeply. He knows the players, he knows the pitfalls of AI investment, and he's a collaborator. Andy, great to have you on the program. Thanks for coming into our CUBE studios. >> Thanks for having me on. >> You're very welcome. Okay, let's set the table with a premise and a series of assertions we want to test with Andy. I'm going to lay 'em out, and then Andy, I'd love for you to comment. So, first of all, according to McKinsey, AI adoption has more than doubled since 2017, but only 10% of organizations report seeing significant ROI; that's a BCG and MIT study. And part of the challenge of AI is that it requires data, it requires good data, data proficiency, which is not trivial, as you know. Firms that can master both data and AI, we believe, are going to have a competitive advantage this decade. Hyperscalers, as we'll show you, dominate AI and ML; we'll show you some data on that. And having said that, there's plenty of room for specialists. They need to partner with the cloud vendors for go-to-market productivity. And finally, organizations increasingly have to put data and AI at the center of their enterprises, and to do that, most are going to rely on vendor R&D to leverage AI and ML. In other words, Andy, they're going to buy it and apply it, as opposed to building it. What are your thoughts on that setup and that premise? >> Yeah, I see that a lot happening in the field, right? So first of all, the only 10% realizing a return on investment: that's so true, because, as we talked about earlier, most companies are still in the innovation cycle. So they're trying to innovate and see what they can do to apply.
A lot of the time, when you look at the solutions, what they come up with, the models they create, the experimentation they do, most times they don't even have a good business case to solve, right? So they just experiment and then figure it out: "Oh my God, this model is working. Can we do something to solve it?" So it's like you found a hammer and then you're trying to find the nail, kind of thing, right? That never works. >> 'Cause it's cool, or whatever it is. >> It is, right? So that's why I always advise, when they come to me and ask me things like, "Hey, what's the right way to do it? What is the secret sauce?" And, we talked about this, the first thing I tell them is, "Find out which business case is having the most problems, one that can be solved using some of the AI use cases," right? Not all of them can be solved. Even after you experiment, do the whole nine yards, spend millions of dollars on it, right, if later on you only make something efficient enough to save maybe $50,000 for the company, or $100,000 for the company, is it really even worth the experiment? So you've got to start by asking: where's the basis for this? Where's the need? What's the business use case? It doesn't have to be about cost efficiency and saving money in the existing processes. It could be a new thing. You want to bring in a new revenue stream? Then figure out what the business use case is, and how much money you can potentially make off of it. The same way that start-ups go after it, right? >> Yeah, pretty straightforward. All right, let's take a look at where ML and AI fit relative to the other hot sectors of the ETR dataset. This XY graph shows net score, spending velocity, on the vertical axis, and presence in the survey, they call it sector pervasion, for the October survey; the January survey's in the field. That squiggly line on ML/AI represents the progression; since the January 21 survey, you can see the downward trajectory. And we position ML/AI relative to the other big hot sectors, ML/AI being the fourth of the big four alongside containers, cloud, and RPA. These have consistently performed above that magic 40% red dotted line for most of the past two years. Anything above 40%, we think, is highly elevated. And we've just included analytics and big data for context and relevant adjacency, if you will. Now, note that green arrow moving toward the 40% mark on ML/AI. I got a glimpse of the January survey, which is in the field. It's got more than a thousand responses already, and it's trending up for the current survey. So Andy, what do you make of this downward trajectory over the past seven quarters, and the presumed uptick in the coming months? >> So, one of the things you have to keep in mind is, when the pandemic happened, it was about survival mode, right? And when somebody's in survival mode, what happens? The luxuries and the innovations get cut. That's what happens. And this is exactly what happened in this situation. So as you can see over the last seven quarters, which date back almost to the pandemic, everybody was trying to keep their operations alive, especially digital operations. How do I keep the lights on? That's the most important thing for them. So while the spend on AI/ML is less overall, I still think the AI/ML spend tied to things like employee experience, or IT ops, AIOps, MLOps, as we talked about, some of those areas actually went up.
There are companies, we talked about it, Atlassian had a lot of platform issues, and the amount of money people are spending on that is exorbitant, simply because they're offering a solution that was not available any other way. So there are companies out there; you can take AIOps or incident management, for that matter, right? A lot of companies have digital incidents they don't know how to properly manage. How do you find an incident and solve it immediately? That's all using AI/ML, and some of those areas, the companies in those areas, are actually growing unbelievably. >> So this is a really good point. If you can bring up that chart again: what Andy's saying is, a lot of the companies in the ETR taxonomy that are doing things with AI might not necessarily show up in a granular fashion. And I think the other point I would make is, these are still highly elevated numbers. If you put on, like, storage and servers, they would read way, way down the list. And look, in the pandemic we had to deal with work from home, we had to re-architect the network, we had to worry about security. So those are really good points that you made there. Let's unpack this a little bit, look at the ML/AI sector in the ETR data and specifically at the players, and get Andy to comment on this. This chart here shows the same XY dimensions, and it just notes some of the players that specifically have services and products that people spend money on, that CIOs and IT buyers can comment on. So the table insert shows how the companies are plotted; it's net score, and then the Ns in the survey. And Andy, the hyperscalers are dominant, as you can see. You see Databricks there, showing strong as a specialist, and then you've got a pack of six or seven in there. And then Oracle and IBM, kind of the big whales of yesteryear, are in the mix. And to your point, companies like Salesforce, which you mentioned to me offline, aren't in that mix, but they do a lot in AI. But what are your takeaways from that data? >> If you could put the slide back on, please, I want to make quick comments on a couple of those. So the first one is about the hyperscalers, right? As you and I talked about earlier, AWS is more about Lego blocks. We discussed that, right? >> Like what? Like SageMaker, as an example. >> "We'll give you all the components, whatever you need." Whether it's an MLOps component, or CodeWhisperer, which we talked about, or a whole platform, or data, whatever you want, they'll give you the blocks, and then you'll build things on top of them, right? But Google took a different way. Matter of fact, if we had done these numbers a few years ago, Google would've been number one, because they did a lot of work with their acquisition of DeepMind and other things. They were way ahead of the pack when it comes to AI, for the longest time. Now, I think Microsoft's move of partnering with OpenAI and taking a huge competitor out is eye-opening; it's unbelievable. You saw that everybody is talking about ChatGPT, right? The OpenAI tool, ChatGPT rather. Remember, as Warren Buffett says, when my laundry lady comes and talks to me about the stock market, it's heated up. So that's how heated up this is. Everybody's using ChatGPT. What that means at the end of the day is, what they're creating, it's still in beta, keep in mind. It's not fully... >> Can you play with it a little bit? >> I have a little bit. >> I have, but it's good and it's not good. You know what I mean?
>> Look, so at the end of the day, you take the massive text, all the available text in the world today, mash it all together, and then you ask a question; it's going to basically search through that, figure it out, and answer back. Yes, it's good. But again, as we discussed, what if there's no business use case, no problem you're going to solve? This is building hype. But then eventually they'll figure it out: for example, all your online chats could be aided by AI chatbots, which already exist, just not at that level. This could help build that, right? Or the other thing we talked about, one of the areas I'm more concerned about, is that it is able to produce convincing-enough original text, at the level that humans can produce. For example, ChatGPT, or a comparable large language transformer, can help you write stories as if Shakespeare wrote them. Pretty close to it. It'll learn from that. So when it comes down to it, think about creating messages, articles, blogs, especially during political seasons, and not necessarily just in the US, but anywhere for that matter. If people are able to produce at that machine speed, throw it at consumers, and confuse them, elections can be won, governments can be toppled. >> Because, to your point about chatbots, chatbots have obviously reduced the number of bodies that you need to support chat. But they haven't solved the problem of serving consumers. Most of the chatbots are conditioned-response: "which of the following best describes your problem?" >> The current chatbots. >> Yeah. "Hey, did we solve your problem?" No, is the answer. So that has some real potential. But if you could bring up that slide again, Ken. I mean, you've got the hyperscalers that are dominant. You talked about Google, and Microsoft is ubiquitous; they seem to be dominant in every ETR category. But then you have these other specialists. How do those guys compete? And maybe you could even cite some of the ones you know. How do they compete with the hyperscalers? What's the key there for, like, a C3.ai or some of the others that are on there? >> So I've spoken with at least two of the CEOs of the smaller companies that you have on the list. One of the things they're worried about is that if they continue to operate independently, without being part of a hyperscaler, either the hyperscalers will develop something to compete against them at full scale, or they'll become irrelevant. Because at the end of the day, look, cloud is dominant. Not many companies are going to do AI modeling and training and deployment, the whole nine yards, independently, by themselves. They're going to depend on one of the clouds, right? So if the customers are already going to be in the cloud, pulling them out to come to you is going to be extremely difficult. So all these companies are going and saying, "You know what? We need to be in the hyperscalers." For example, you could have looked at DataRobot recently; they made announcements with Google and AWS, and they are all over the place. So you need to go where the customers are, right? >> All right, before we go on, I want to share some other data from ETR on why people adopt AI, and get your feedback. So the data historically shows that feature breadth and technical capabilities were the main decision points for AI adoption. That says to me that there's too much focus on technology. In your view, is that changing? Does it have to change? Will it change? >> Yes. The simple answer is yes.
So here's the thing: the data you're speaking from is from previous years. >> Yes. >> I can guarantee you, if you look at the latest data that's coming in now, those two will be secondary and tertiary points. Number one will be ROI, and how do I achieve it? "I've spent a ton of money on all of my experiments." This is the same theme I'm seeing across the board when talking to everybody who's spending money on AI. "I've spent so much money on it. When can I get it live in production? How can I quickly get there?" Because, you know, the board is breathing down their necks: you already spent this much money, show me something that's valuable. So ROI, take it from me, I'm predicting this for 2023, is going to become number one. >> Yeah, and if people focus on it, they'll figure it out. Okay, let's take a look at some of the top players that won, some of the names we just looked at, and double-click on that and break down their spending profile. So the chart here shows the net score and how net score is calculated. Pay attention to the second set of bars, that's Databricks, who was pretty prominent on the previous chart, and we've annotated the colors. The lime green is, we're bringing the platform in new. The forest green is, we're going to spend 6% or more relative to last year. The gray is flat spending. The pinkish is, our spending's going to be down on AI and ML, 6% or worse. And the red is churn. So you don't want big red. You subtract the reds from the greens and you get net score, which is shown by those blue dots that you see there. So AWS has the highest net score and very little churn, I mean, low single-digit churn. But notably, you see Databricks and DataRobot next in line, with Microsoft and Google also showing very low churn. Andy, what are your thoughts on this data? >> So, a couple of things stand out to me. Most of them are in line with my conversations with customers. A couple of them stood out to me, like how badly IBM Watson is doing. >> Yeah, bring that back up if you would. Let's take a look at that. IBM Watson is on the far right, and the red, that bright red, is churn, and again, you want low red here. Why do you think that is? >> Well, look, IBM has been at the forefront of innovating things for many, many years now, right? And over the course of the years, we talked about this, they moved from a product-innovation-centric company into more of a services company. And over the years, at one point, you know, they were making the majority of their money from services. Now things have changed. Arvind has taken over; he came from research. So he's doing a great job of trying to reinvent them as a company. But they have a long way to go to catch up. IBM Watson, if you think about it, played what, Jeopardy, and chess, years ago, like 15 years ago? >> It was jaw-dropping when you first saw it. And then they weren't able to commercialize it. >> Yeah. >> And you're making a good point. When Gerstner took over IBM at the time, John Akers wanted to split the company up. He wanted to have a database company, he wanted to have a storage company, because that's where the industry trend was. Gerstner said no. He came from AMEX, right? He came from American Express. He said, "No, we're going to have a single throat to choke for the customer." They bought PWC for relatively short money, I think it was $15 billion, completely transformed, and I would argue saved, IBM.
But the trade-off was, it sort of took them out of product leadership. And so from Gerstner to Palmisano to Rometty, it was really a services-led company. And I think Arvind is really bringing it back to a product company with strong consulting; I mean, that's one of the pillars. And so I think they've got a strong story in data and AI. They've just got to sort of bring it together better. Bring that chart up one more time. I want to, the other point is Oracle. Oracle sort of has the dominant lock-in for mission-critical database, and they're sort of applying AI there. But to your point, they're really not an AI company in the sense of taking unstructured data and doing new things with it. It's really about how to make Oracle better, right? >> Well, you've got to remember, Oracle is about databases for structured data. So in yesterday's world, they were the dominant database. But you know, if you start storing things like videos and text and audio, and then start doing vector search and all that, Oracle is not necessarily the database company of choice. And with their strongest thing being apps, and building AI into the apps, they're kind of surviving in that area. But again, I wouldn't name them as an AI company, right? But the other thing that surprised me in that list you showed me is, yes, AWS is number one. >> Bring that back up if you would, Ken. >> AWS is number one, as it should be. But what actually caught me by surprise is how DataRobot is holding up, you know? I mean, look at that. On either net-new adoption or expansion, DataRobot seems to be doing equally well, even better than Microsoft and Google. That surprises me. >> DataRobot's, and again, this is a function of spending momentum. So remember from the previous chart that Microsoft and Google are much, much larger than DataRobot; DataRobot is more niche. But DataRobot has spending velocity, and has always had strong spending velocity, despite some of the recent challenges, organizational challenges. And then you see these other specialists: H2O.ai, Anaconda, Dataiku, and a little bit of red showing there for C3.ai. But these, again, to stress, are the specialists, other than obviously the hyperscalers. These are the specialists in AI. All right, so we hit the bigger names in the sector. Now let's take a look at the emerging technology companies. And one of the gems of the ETR dataset is the Emerging Technology Survey, called ETS. They used to do it just twice a year; it's now run four times a year. I just discovered it kind of mid-2022. And it's exclusively focused on private companies that are potential disruptors. They might be M&A candidates, and if they've raised enough money, they could be acquirers of companies as well. So Databricks would be an example; they've made a number of investments in companies. Snyk would be another good example. Companies that are private, but they're buyers; they hope to go IPO at some point in time. So this chart here shows the emerging companies in the ML/AI sector of the ETR dataset. The dimensions of this are similar: net sentiment on the Y axis and mind share on the X axis. Basically, the ETS study measures awareness on the X axis, and intent to do something with it, evaluate or implement or not, on the vertical axis. So it's like net score on the vertical, where negatives are subtracted from the positives. And again, mind share is vendor awareness; that's the horizontal axis.
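(Editor's note: a quick worked example of the "subtract the negatives from the positives" arithmetic behind net score and net sentiment as described here. The survey percentages are invented for illustration, not real ETR data.)

```python
# Net score: greens (new adoption + spending up) minus reds (spending down
# + churn), with flat spenders ignored. All inputs are invented percentages.
def net_score(new_adoption, spend_up, flat, spend_down, churn):
    """Inputs are percentages of survey respondents; they must total 100."""
    assert abs(new_adoption + spend_up + flat + spend_down + churn - 100) < 1e-9
    return (new_adoption + spend_up) - (spend_down + churn)

# Hypothetical vendor: 20% bringing it in new, 40% spending 6%+ more,
# 30% flat, 7% spending less, 3% churning.
print(net_score(20, 40, 30, 7, 3))   # 50 -- well above the 40% "elevated" line
```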
Now, that inserted table shows net sentiment and the Ns in the survey, which inform the position of the dots. And you'll notice we're plotting TensorFlow as well. We know that's not a company, but it's there for reference, as open source tooling is an option for customers, and ETR sometimes likes to show that as a reference point. Now, we've also drawn a line for Databricks, to show how relatively dominant they've become in mind share across the past 10 ETS surveys, going back to late 2018. And you can see a dozen or so other emerging tech vendors. So Andy, I want you to share your thoughts on these players. Who are the ones to watch? Name some names. We'll bring that data back up as you comment. >> So Databricks, as you said, remember we talked about how Oracle is not necessarily the database of choice, you know? So Databricks is kind of trying to solve some of those issues for AI/ML workloads, right? And the problem is also that no one company can solve all of the problems. For example, if you look at the names in here, some of them are database names, some of them are platform names, some of them are MLOps companies like DataRobot (indistinct) and others, and some of them are feature-store companies, like, you know, Tecton and stuff. >> So it's a mix of those sub-sectors? >> It's a mix of those companies. >> We'll talk to ETR about that. They'd be interested in your input on how to make this more granular, into these sub-sectors. You've got Hugging Face in here. >> Which is NLP, yeah. >> Okay. So, your take: are these companies going to get acquired? Are they going to go IPO? Are they going to merge? >> Well, most of them are going to get acquired. My prediction would be that most of them will get acquired, because look, at the end of the day, hyperscalers need these capabilities, right? So they're going to either create their own, AWS is very good at doing that, they have done a lot of those things. But the other ones, particularly Azure, are going to look at it and say, "You know what, it's going to take time for me to build this. Why don't I just go and buy you?" Right? Or even the smaller players, like Oracle or IBM Cloud, this will exist; they might even take a look at them, right? So at the end of the day, a lot of these companies are going to get acquired or merged with others. >> Yeah. All right, let's wrap with some final thoughts. I'm going to make some comments, Andy, and then ask you to dig in here. Look, despite the challenge of leveraging AI, you know, Ken, if you could bring up the next chart, we're not repeating, we're not predicting the AI winter of the 1990s. Machine intelligence is a superpower that's going to permeate every aspect of the technology industry. AI and data strategies have to be connected. Leveraging first-party data is going to increase AI competitiveness and shorten time to value. Andy, I'd love your thoughts on that. I know you've got some thoughts on governance and AI ethics. You know, we talked about ChatGPT, deepfakes; help us unpack all these trends. >> So there's so much information packed in there, right? The AI and data strategy, that's very, very, very important. If you don't have proper data, people don't realize this, your AI is the models that you build, and those are predominantly based on the data you have. AI cannot predict something that's going to happen without knowing what it is. It needs to be trained; it needs to understand what it is you're talking about.
So 99% of the time, you've got to have good data to train with. This is where, as I mentioned to you, the problem is: a lot of these companies can't afford to collect real-world data, because it takes too long and it's too expensive. So a lot of these companies are trying to go the synthetic data route. That has its own set of issues, because you can't use all... >> What's synthetic data? Explain that. >> Synthetic data is basically not real-world data, but created or simulated data, modeled on and based on real data. It looks, feels, smells, tastes like real data, but it's not exactly real data, right? This is particularly useful in the financial and healthcare industries. Because at the end of the day, if you have real data about your and my medical histories, even if you redact it, you can still reverse it. It's fairly easy, right? >> Yeah, yeah. >> By creating synthetic data, there is no correlation between the real data and the synthetic data. >> So that's part of AI ethics and privacy, and, okay. >> So the synthetic data, the issue with that is, when you're trying to commingle it with real data: you can't create models based just on synthetic data, because synthetic data, as I said, is artificial data. So basically you'd be creating artificial models, so you've got to blend it in properly, and that blend is the problem. How much real data, how much synthetic data can you use? You've got to use judgment between efficiency, cost, and the time-duration stuff. So that's one-- >> And risk. >> And the risk involved with that. And the secondary issue, which we talked about, is that when you're creating, okay, you take a business use case, okay, you think about investing in things, you build the whole thing out, and you're trying to put it out into the market. Most companies that I talk to don't have proper governance in place. They don't have ethics standards in place. They don't worry about the biases in data; they just go on trying to solve a business case. >> It's the wild west. >> 'Cause that's how they start. It's the wild west! And then, at the end of the day, when they get close to some legal litigation action, or something else happens, that's when the "oh shit!" moment happens, right? And then they come in and say, "You know what, how do I fix this?" The governance, security, and all of those things, ethics, bias, de-biasing, none of them can be an afterthought. It's got to start from the get-go. So you've got to start at the beginning, saying, "You know what, I'm going to do all of these AI programs, but before we get into this, we've got to set a framework for doing all these things properly." Right? And then the-- >> Yeah. So let's go back to the key points. I want to bring up the cloud again, because you've got to get cloud right. Getting that right matters in AI, to the points that you were making earlier. You can't just be out on an island, and the hyperscalers are going to obviously continue to do well. More and more data is going into the cloud, and they have the native tools. To your point, in the case of AWS; Microsoft's obviously ubiquitous; Google's got great capabilities here. They've got integrated ecosystem partners that are going to continue to strengthen through the decade. What are your thoughts here? >> So, a couple of things. One is the last-mile ML, or last-mile AI, that nobody's talking about. That needs to be attended to.
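(Editor's note: a minimal sketch of the synthetic-data idea Andy describes a moment earlier, sampling artificial records from distributions fitted to real data so that no synthetic row corresponds to a real person. Production generators, often GAN- or copula-based, are far more sophisticated; the data and numbers here are invented.)

```python
# Fit a simple multivariate Gaussian to "sensitive" records, then sample
# new rows from it. The output looks and feels like the original, but no
# synthetic row maps back to a real individual.
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is real, sensitive data: (age, systolic blood pressure)
real = np.array([[34, 118], [51, 131], [45, 125], [62, 140], [29, 115]], float)

mu = real.mean(axis=0)            # fitted means
cov = np.cov(real, rowvar=False)  # fitted covariance between columns

synthetic = rng.multivariate_normal(mu, cov, size=1000)
print(synthetic[:3].round(1))     # plausible-looking, but artificial, records
```

This also illustrates the blending problem Andy raises: a model trained only on these samples learns the fitted distribution, not reality, so practitioners have to judge how much synthetic data can safely be mixed with real data.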
There are a lot of players coming up in the market there. When I talk about the last mile, I'm talking about, after you're done with the experimentation of the model, how fast, quickly, and efficiently can you get it to production? So that's production being-- >> Compressing that time is going to put dollars in your pocket. >> Exactly, right. >> So once, >> If you get it right. >> If you get it right, of course. So there are a couple of issues with that. Once you figure out that the model is working, that's perfect. People don't realize, the moment the decision is made, it's like a new car: after you purchase it, the value decreases on a minute-by-minute basis. Same thing with the models. Once the model is created, you need to be in production right away, because it starts losing its value by the second, by the minute. So, issue number one: how fast can I get it over there? So your deployment, inferencing efficiently at the edge locations, your optimization, your security, all of this is at issue. But you know what is more important than that in the last mile? Keeping the model current; you have to continue to work on it. Again, going back to the car analogy, at one point you've got to figure out that your car is costing more to operate than it's worth, and you've got to get a new car, right? And it's the same thing with the models as well. If your model has reached that stage, it is actually a potential risk for your operation. To give you an idea: if Uber has a model, and the first time a car going from point A to point B costs you $60, but the model decays and the next time it gives me a $40 rate, I would take it, definitely, but it's a loss for the company. The business risk associated with operating on a bad model, you should realize it immediately: pull the model out, retrain it, redeploy it. That is key. >> And that's got to be huge in security. Model recency, and security, to the extent that you can get real time, is big. I mean, you see Palo Alto, CrowdStrike, and a lot of other security companies injecting AI. Again, they won't show up in the ETR ML/AI taxonomy per se as a pure play. But ServiceNow is another company that you have mentioned to me offline; AI is just getting embedded everywhere. >> Yep. >> And then, I'm glad you brought up real-time inferencing, 'cause a lot of the modeling, if we can go to the last point that we're going to make, a lot of the AI today is modeling done in the cloud. The last point we wanted to make here, and I'd love to get your thoughts on this, is that real-time AI inferencing, for instance at the edge, is going to become increasingly important. It's going to usher in new economics, new types of silicon, particularly Arm-based, we've covered that a lot on Breaking Analysis, new tooling, new companies, and that could disrupt the sort of cloud model if new economics emerge, 'cause cloud is obviously very centralized, and they're trying to decentralize it. But over the course of this decade, we could see some real disruption there. Andy, give us your final thoughts on that. >> Yes and no. I mean, at the end of the day, cloud is kind of centralized now, but a lot of these companies, including AWS, are trying to decentralize it by putting out their own sub-centers and edge locations. >> Local Zones, Outposts. >> Yeah, exactly, particularly the Outposts concept. And even if it becomes like a micro-center and stuff, it won't go down to the localized level of a single IoT device. But again, the cloud extends itself toward that level.
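(Editor's note: a minimal sketch of the model-decay check Andy describes, comparing live accuracy against accuracy at deployment and flagging a retrain once the drop crosses a business-risk threshold. The window size, the threshold, and the retrain hook are all assumptions, not a production MLOps recipe.)

```python
# Track prediction outcomes in a rolling window and compare live accuracy
# to the baseline measured at deployment. When the gap exceeds max_drop,
# it's time to pull the model out, retrain, and redeploy, as Andy says.
from collections import deque

class DecayMonitor:
    def __init__(self, baseline_accuracy, window=500, max_drop=0.05):
        self.baseline = baseline_accuracy
        self.recent = deque(maxlen=window)   # rolling window of hit/miss
        self.max_drop = max_drop

    def record(self, prediction, actual):
        self.recent.append(prediction == actual)

    def needs_retraining(self):
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough live evidence yet
        live = sum(self.recent) / len(self.recent)
        return (self.baseline - live) > self.max_drop

# Hypothetical wiring into a production event stream:
# monitor = DecayMonitor(baseline_accuracy=0.92)
# for pred, actual in production_stream:
#     monitor.record(pred, actual)
#     if monitor.needs_retraining():
#         retrain_and_redeploy()             # assumed pipeline hook
```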
So if there is an opportunity or need for it, the hyperscalers will figure out a way to fit that model. So I wouldn't worry too much about that, about deployment and where to have it and what to do with it. But, you know, figure out the right business use case, get the right data, get the ethics and governance in place, make sure you get it to production, and make sure you pull the model out when it's not operating well. >> Excellent advice. Andy, I've got to thank you for coming into the studio today, helping us with this Breaking Analysis segment. Outstanding collaboration and insights and input in today's episode. Hope we can do more. >> Thank you. Thanks for having me. I appreciate it. >> You're very welcome. All right, I want to thank Alex Marson, who's on production and manages the podcast, and Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and our newsletters. And Rob Hoof is our editor-in-chief over at SiliconANGLE; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, all you've got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com to get in touch, or DM me at dvellante, or comment on our LinkedIn posts. Please check out ETR.AI for the best survey data in the enterprise tech business, and Constellation Research; Andy publishes some awesome information on AI and data there. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, everybody, and we'll see you next time on Breaking Analysis. (gentle closing tune plays)
Laurence Pitt, Juniper Networks | RSAC USA 2020
>> Announcer: Live from San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco, brought to you by SiliconANGLE Media. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the RSA 2020 show, here in Moscone in San Francisco. It's Thursday, we've been going wall to wall, and we're really excited for our next guest. We've been talking about some kind of interesting topics, getting a little bit into the weeds, not on the technology, but on some of the philosophical things happening in this industry that you should be thinking about. And we're excited to welcome Laurence Pitt, he is the cybersecurity strategist at Juniper Networks. Laurence, great to meet you. >> Thank you very much, hi. >> Yeah, so before we turned the cameras on, we were talking about all kinds of fancy things, so let's just jump into it. One of the topics that gets a lot of news is deepfakes, and there's a lot of cute, funny stuff out there of people's voices and things that they're saying not necessarily being where you'd expect them to be, but there's a real threat here, and a real kind of scary situation we're just barely beginning to scratch the surface of. I want you to share some of your thoughts on deepfakes. >> I think you made a good point at the start: there's a lot of cute and funny stuff out there, there's a lot of fake political stuff you see. It's seen as being humorous, so people are sharing it a lot. But there is a darker side that's going to happen with deepfakes, because a lot of the things that you see today that go out on video, the reason it is what it is, is because you're very familiar with the person that you're seeing in that video. It's a famous politician, it's a movie star, and they're saying something that's out of character or funny, and that's it. But what if that was actually the chief financial officer of a major company, where the company appears to have launched a video, very close to the bell ringing on the stock market, that makes some kind of announcement about a product, or a delay, or something to do with their quarterly figures? That one-minute video could do a huge amount of damage to that organization. It could be that somebody's looking to take advantage of a dip: the video goes out, the stock's going to dip, they buy it up, and then they profit. But it could also be much darker; it could be somebody who's trying to do that to actually damage the business. >> So, would you define a very good text-based phishing, spear phishing, as a deepfake? Where they've got enough data that the relevance of the topic is spot on, the names involved in the text are spot on 'cause they've done their homework, and the transactions that they're suggesting are really spot on and consistent with the behavior of the things that their target does each and every day.
>> So I'm not sure I'd define that as a deepfake yet. Obviously you've got two types of phish: you've got a spear phish, which is the perfected version, where the work has gone into targeting you as a specific, high-value individual for some reason in your organization. But what we are seeing is that, in the same way deepfakes are leveraging technology to manipulate somebody, the fact that we're all on Instagram, we're all on Facebook, we're all on Twitter, means that social manipulation is a lot easier for the bad guys. They can create phishing campaigns that appear to be very much more targeted: they can create emails because they know you've got a dog, they know roughly where you live, because this information is coming up in pictures and it's all out there on the internet. And so they can generate automated messaging and emails and things that will appear to be from whomever you expect to receive them from, using words that you think only they would know, to make them appear more realistic. >> Right. >> And that's actually something we've seen the start of, but still, the thing to spot is that the grammar is very often not very good in these, if they haven't perfected the language side of it. >> But that's coming, right, but that's coming. >> But they're all getting much more accurate, yeah. >> We use an automated transcription service to do all the transcription on these videos. And you know, it's funny, you can pay for the machine or you can pay for the human; we do both. But it's amazing, even only in the last six months, to see the delta shrink between the machine-generated and the person-generated. And this is even in pretty technical stuff, with the very specific kind of vocabulary around the tech conferences that we cover. And the machines are catching up very, very fast. >> They very much are. But then if you think about it, this is not new; it's been happening in the background for a while. Things like, quite a lot of legal work is done this way. If you look at an estate agency, for example, conveyancing: it's not uncommon for the conveyancing to be done using machine learning and computer-generated documentation, because it's within a framework. But of course, the more it does that, the more it learns, and then that software can more easily be applied to other areas to do that accurately. >> Right. So another big topic that gets a lot of conversation is passwords. You know, it's been going on forever, and now we're starting to get two-factor authentication; the new Apple phones, you can look at it and it identifies you, so now you have kind of biometrics. But that can all be hacked too, right? It's just a slightly different method. But, you know, even those, the biometric is not all that... >> Well... >> ...secure. >> I think the thing is, when you're logging into something, there are two pieces of information you need: there's what you are as a person, and then there's the thing that you know. A lot of people confuse biometrics, thinking of biometric authentication as their password, where actually the biometric is the "you". And so you still should back things with strong passwords, you still should have that behind it, because if somebody does get through the biometric, that shouldn't automatically give them access to absolutely everything.
You know, these are technologies that are provided to make things easier, to make it so that you can have less-strong passwords, so that you do know where you're storing information. But people tend to rely on them too much. It is still very, very important to use strong passwords, and to think about the process for how you want to do that: taking statements and then turning those statements into strange sentences that only you understand, maybe having your own code to do that conversion, so that you have a very strong password that nobody's ever going to pick up, right? We know that common passwords, unfortunately, are still 1234567 and "password"; it's horrific. >> I know, I saw some article that you're quoted in, and it had the worst 25 passwords for 2018 and 2019, and it's basically just pick a string. >> They just don't change. >> But you know, it's interesting, 'cause with a hard password, you know, it's easy enough to take the time and go ahead and create that strong password. But then, you know, three months later, Salesforce keeps making me do a new one, or the bank keeps making me do a new one. What's your opinion on some of these password managers? Because to me, it seems like, okay, well, I might be doing a great job creating some crazy passwords for the specific accounts, but what if I get hacked on that thing? Now they have everything in a single place. >> Yeah. So this is where things like two-factor authentication become really, really important. So I use a password manager, and I'm very, very careful with how my passwords are created and what goes in there, so that I know where certain passwords are created for certain types of account and certain complexities. But I also turned on two-factor. And if somebody does try to go into my online password account, I will get an alert to say there's been a single failed authentication, and I will get an alert for an authentication that happens where I'm not; you know, then I will get a note to say that I've done that. So this is where that second factor actually becomes very important. If you have something that gives you the option to use two-factor authentication, use it. >> Use it. >> You know, it is a pain when you're trying to do something with your credit card and you have to do a one-time text, but it'd be more of a pain if you didn't, and somebody else was able to use it and run it up nicely for you, wouldn't it, right? >> Right. You know, it's funny, part of the keynote from Rowan was talking about, as a profession, spending way too much time thinking about the most crazy, bizarre, sophisticated attacks, at the fault of not necessarily paying attention to the basics, and the basics are still where a lot of the damage is done, right? >> You know what, this is the thing, and there are a few things in our industry. So exactly what you just said: everybody seems to believe that they're going to be the target of the next really big, complex, major attack. The reality is they aren't. And the reality is that they're being hit by the basics, like ransomware, phishing, spear phishing, credential stuffing; all these attacks are hitting them all the time.
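The "turn it on" advice about two-factor is easier to trust once you see how small the mechanism is. Below is a minimal, standard-library-only sketch of a time-based one-time password (TOTP, RFC 6238), the kind an authenticator app generates; the base32 secret is a made-up example, and a real deployment should use a vetted library rather than this.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation, RFC 4226
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret (made up, base32 like authenticator apps use). The server
# holds the same secret and recomputes the code to verify what you submit.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived from a shared secret plus the clock, a phished password alone is not enough: the attacker also needs a value that expires in 30 seconds.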
And so they need to have those foundational elements in place against those attacks, understanding what they are, and not worry about the big stuff, because the reality is, if your organization is going to be hit by a nation-state-level complex attack, there's only so much you can do to fight against that; it's going to happen. And that's the thing with a lot of the buzzwords that we see in cyber today. >> And with smaller companies, SMBs, is really their only solution to go with cloud providers and other types of organizations that have the resources to get the people and the systems and the processes to really protect them? Because you can't expect the guy who sells flowers down off Fourth Street to have any type of sophistication needed, but as soon as he plugs that server in with a website, he's instantly going to get attacked, right? >> So the thing is, you can't expect that guy to be an expert. He's not going to be an expert in cybersecurity, and the cost of hiring someone is going to outweigh the value he's getting back. My recommendation in that case is to look for organizations that can actually help you to become more cyber resilient. So an organization that I work with, it's actually UK and US based, the Global Cyber Alliance: they produce a small business toolkit. It's a set of tools, which are not chargeable, that they've put together. Some of it might be a white paper or a set of recommendations; it might actually be a vendor-developed tool that they can download to check for vulnerabilities, or something like that. But what it does is provide a framework for them, so they go through it and say, okay, yeah, I get this, this is English, simple language, and it helps to protect me as a small business owner, not a massive enterprise where actually none of those solutions fits what I want. So that's my recommendation to small businesses: look for these types of organizations, work with someone like that, listen to what they're doing, and learn cyber from them. >> Yeah, that's a good tip. I want to kind of double-click on that. So that makes sense for a small business, where it's easy to measure the ROI: I just can't afford the security pros. >> Yeah. >> For bigger companies, when they're doing their budgeting for security, to me it's always really interesting; it's insurance at some point, you know? Wouldn't it be great if I could insure 100% coverage, but we can't, and there are other needs in the business beyond just investing in cybersecurity. How should people think about the budgets relative to, as you just said, the value that they're trying to protect? How do you help people think about their cybersecurity budgets and allocations? >> So there needs to be, and this is happening, a change in how the conversation works between the security team and the board, who own those budgets. What tends to happen today is that the cyber team wants to provide the right information to the board, information that's going to make them see how good what they're doing is, how successful they are, and that justifies the spend they've made and also the future investments they're going to need to make. But very often, that falls back on reporting big numbers, statistics: we blocked billions of threats, we turned away millions of pieces of malware.
Actually, that conversation needs to narrow down, and the team should be saying: okay, so in the last two months we had five attacks that came in, we dealt with them by doing this, these are the changes we've made, and this is what we've learned. However, if we had had this additional piece, or this switched on, then we would have been more successful, or we'd have been faster, or we could have cut down the time spent dealing with it. Having that risk-and-compliance type of conversation actually adds value to the security solutions they've got, and the board understands that; they get that conversation, and they're going to be happy to engage. This is something that is happening, and it's going to get better and better, but that's where things need to go. >> Right. 'Cause the other hard thing, it's kind of like we joked earlier, it's kind of like an offensive lineman: they do a great job for 69 plays, and on the 70th play they get a holding call; that's all anybody sees. And again, that was part of Rowan's keynote, that we can't necessarily brag about all the DDoS attacks that we stopped, 'cause we can't let the bad guys know where we're being successful. So it's a little bit of a challenge in trying to show the ROI, show the value, when you can't necessarily raise your hand and say, hey, we stopped the 87th attack. >> Yeah. >> 'Cause it's only the 88th that really is the one that showed up in the Wall Street Journal. >> I think the thing with that is, when organizations are looking at security solutions, we're very aware of that. As you know, organizations struggle to get customer references; you'll see that a lot of the references are a major financial or a large manufacturing organization, because companies don't want to step up and say, "I implemented security, and it did this," because the reverse of that is, "you didn't have it before, then?" >> Right, right, or, "go in that door, not that door." >> Yeah, and so, but there are a lot of good testing organizations out there that actually do take the security solutions and run them through very, very stringent tests, and then report back on the success of those tests. So you know, we work closely with NSS Labs, for example; we've had some very good reports come out from there, where they do a drill-down into how fast, how much, how many, and that's the kind of thing you can then take to the board. That's the kind of thing that you can publicize, to say the reason we're using Juniper SRX firewalls is because in this report, this is what it said, this is how good that product was. And then you're not admitting a weakness; you're actually saying, we're strong because we did this work, on this research background. >> Right, a very different kind of approach. >> Yeah, yeah. >> Yeah, well, Laurence, really enjoyed the conversation. We'll have to leave it here, but I think you have no shortage of job security, even though we will know everything in 2020 with the benefit of hindsight. >> Really, yeah, thank you very much for that. >> All right, thanks a lot. All right, he's Laurence, I'm Jeff, you're watching theCUBE. We're at RSA 2020 in Moscone. Thanks for watching, we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Laurence Pitt | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Lawrence | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Laurence | PERSON | 0.99+ |
Moscone | LOCATION | 0.99+ |
second factor | QUANTITY | 0.99+ |
one minute | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
Juniper Networks | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Rowan | PERSON | 0.99+ |
69 plays | QUANTITY | 0.99+ |
Thursday | DATE | 0.99+ |
NSX | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
25 passwords | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
One time | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
UK | LOCATION | 0.99+ |
two pieces | QUANTITY | 0.99+ |
two types | QUANTITY | 0.99+ |
RSA 2020 | EVENT | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
three months later | DATE | 0.99+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
Wall Street Journal | TITLE | 0.98+ |
ORGANIZATION | 0.97+ | |
seventh seventh play | QUANTITY | 0.97+ |
Five attacks | QUANTITY | 0.97+ |
Matt | PERSON | 0.97+ |
millions of pieces | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
two factor | QUANTITY | 0.96+ |
single | QUANTITY | 0.95+ |
RSAC | ORGANIZATION | 0.95+ |
1234567 | OTHER | 0.94+ |
88 | QUANTITY | 0.94+ |
English | OTHER | 0.93+ |
RSA conference 2020 | EVENT | 0.92+ |
ORGANIZATION | 0.91+ | |
theCUBE | ORGANIZATION | 0.89+ |
last six months | DATE | 0.86+ |
last two months | DATE | 0.86+ |
billions of threats | QUANTITY | 0.85+ |
Salesforce | ORGANIZATION | 0.85+ |
each | QUANTITY | 0.85+ |
100% coverage | QUANTITY | 0.85+ |
2020 | ORGANIZATION | 0.81+ |
fourth street | QUANTITY | 0.74+ |
Juniper X | ORGANIZATION | 0.72+ |
USA | LOCATION | 0.68+ |
double | QUANTITY | 0.66+ |
deepfakes | TITLE | 0.63+ |
things | QUANTITY | 0.61+ |
Financial | PERSON | 0.58+ |
87 | OTHER | 0.57+ |
deepfakes | ORGANIZATION | 0.49+ |
Delta | TITLE | 0.46+ |
Naveen Rao, Intel | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren. This is day one of our coverage of AWS re:Invent 2019. Naveen Rao here, he's the corporate vice president and general manager of the artificial intelligence, AI products group at Intel. Good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome. So what's going on with Intel and AI, give us the big picture. >> Yeah, the very big picture is, I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence. We took sort of a divergent path, where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world. So everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle; I mean, when I first started in this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, and that has implications for society. But there's a whole new set of workloads emerging, and that's driving, presumably, different requirements. So what do you see as the infrastructure requirements for those new workloads; what's Intel's point of view on that? >> Well, maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it. One is called training, or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to the satisfaction of whatever performance metrics are relevant to your application, it's rolled out and deployed; that phase is called inference. So these two are actually quite different in their requirements, in that inference is all about the best performance per watt: how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility I have for exploring different types of models, and training them very, very fast, because when this field started taking off in 2013, 2014, typically training a model back then would take a month or so. Those models now take minutes to train, but the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, and anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of the data, and the complexity of the models. So, a very rough categorization of that complexity can be the number of parameters in a model. Back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model.
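As a concrete illustration of the two phases described above, the toy sketch below fits a tiny logistic-regression model by iterating over made-up data (the expensive training phase), then answers individual queries with one cheap forward pass (inference). It illustrates the shape only; the data, learning rate, and epoch count are arbitrary assumptions, and the models being discussed are millions to billions of parameters, not two.

```python
import math
import random

random.seed(0)

# --- Training: iterate over a data set to fit model parameters ---
# Made-up 1-D data: label is 1 when x > 2, plus a little noise.
data = [(x, 1.0 if x + random.gauss(0, 0.3) > 2 else 0.0)
        for x in [random.uniform(0, 4) for _ in range(200)]]

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(300):                          # the slow, iterative part
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))      # sigmoid prediction
        grad = p - y                              # dLoss/dlogit for log loss
        w -= lr * grad * x
        b -= lr * grad

# --- Inference: a single cheap forward pass per input ---
def infer(x):
    return 1 / (1 + math.exp(-(w * x + b)))

print(f"p(label=1 | x=1.0) = {infer(1.0):.2f}")   # should be low
print(f"p(label=1 | x=3.0) = {infer(3.0):.2f}")   # should be high
```

Training touches every example hundreds of times; inference touches two numbers. That asymmetry is why the two phases end up with such different performance-per-watt goals.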
Now they're in the billions; one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a 300-to-500-trillion-parameter model, so we're still pretty far away from that. We've got a long way to go. >> Yeah, so one of the things about these models is that once you've trained them, they do things, but these are incredibly complex mathematical models. So are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "no, no, no, when this model's trained to do this thing, this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have. I'll say, at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain; we trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head; you say, you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI. Some aspects we can bound and say, I have analytical methods for measuring performance. In other places, it's actually not so easy to measure the performance analytically; we have to do it empirically, which means we have data sets where we ask, does it stand up to all the different tests? One area we're seeing that in is autonomous driving. Autonomous driving is a bit of a black box, and the number of situations one can encounter on the road is almost limitless. So for a 16-year-old, we say "go out and drive," and eventually you sort of learn it. The same thing is happening now for autonomous systems: we have these training data sets where we say, "do you do the right thing in these scenarios?" and then we say, okay, we trust that you'll probably do the right thing in the real world. >> But we know that Intel has partnered with AWS on autonomous driving with their DeepRacer project, and I believe the grand final is on Thursday. It's been running for, I think it was announced on theCUBE last year, and there have been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track. So, speaking of empirical testing of whether or not it works, lap times give you a pretty good idea. What have you learned from that experience, of having all of these people go out and learn how to use these ML models on a real live race car, racing around a track? >> I think there are several things. One thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process; I think competition is how you push technology forward. The tool side is actually more interesting to me, in that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process; we're still learning what we can expose as knobs, what kinds of areas of innovation we allow the user to explore, and where we sort of lock it down to make it easy to use.
So I think that's the biggest learning we get from this: how I can deploy AI in the real world, and what's really needed from a tool-chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner. Obviously AWS has a huge ecosystem of developers, all kinds of different developers; web developers are one sort of developer, database developers are another, and AI developers are yet another, and we're partnering together to empower that AI base. What we bring from a technological standpoint is of course the hardware: our CPUs, which are AI-ready now, with a lot of software that we've been putting out in open source, and then other tools like OpenVINO, which make it very easy to start using AI models on our hardware. And so we tie that into the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem of developers. >> I want to go back to the point you were making about the black box. AI, people are concerned about that; they're concerned about explainability. Do you feel like that's a function of just the newness, that we'll eventually get over? I mean, I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about. If you're talking about the traceability of a financial transaction, we kind of need that, maybe for legal reasons, so even for humans we do that. You've got to write down everything you did, why did you do this, why'd you do that; so we actually want traceability for humans, even. In other places, I think it really is about the newness: do I really trust this thing when I don't know what it's doing? Trust comes with use; after a while it becomes pretty straightforward. I think that's probably true for a cell phone. I remember the first smartphones coming out in the early 2000s; I didn't trust how they worked, I would never do a credit card transaction on 'em, these kinds of things. Now it's taken for granted: I've done it a million times, and I never had any problems, right? >> It's the opposite in social media, for most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from the MIT Media Lab, which is that we already have AI that we're quite used to; they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day, in a collaboration with humans. So a dog sort of replaces a particular job, but then again they don't; I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this: if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because at least my eventual goal as an AI researcher is to make the interface for intelligent agents be like a dog's: train it like a dog, reinforce it for the behaviors you want, and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs? What is a GAN, what does it stand for, what does it mean? >> Generative Adversarial Networks.
What this means is that you can kind of think of it as two competing sides solving a problem. So if I'm trying to make a fake picture of you, one that makes it look like you have no hair, like me, you can see a Photoshop job and you can kind of tell: that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are working against each other: one's generating stuff, and the other one's saying, is it fake or not, and eventually they keep improving each other. This one tells that one, "no, I can tell," so this one goes and tries something else; this one says, "no, I can still tell." Once the discerning network can't tell anymore, you've built something that's really good; that's sort of the general principle here. So we basically have two things fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that example because it is relevant in this case, and that's kind of where deepfakes came from: GANs. >> All right, okay, and so, wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI? Two-part question: how far can we take AI in the near to mid term, let's say in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So, how far can we take it? Well, we often have the sci-fi narrative out there of building killer machines and this and that. I don't know that that's actually going to happen anytime soon, for several reasons. One is, we build machines for a purpose; they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece: they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from an engineering standpoint, but we're not really close to making those trillions of parameters work together in such a coherent way as a human brain does, and as efficiently; a human brain does that in 20 watts, and to do it today would take multiple megawatts, so it's not really something that's easily found, just lying around. Now, how far should we take it? I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. A simple equation I always write down is, people say, "radiologists aren't going to have a job." No, no, no; what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale them almost freely to millions of other people. It basically increases the accessibility of expertise; we can scale expertise, and that's a good thing. It solves problems like we have in healthcare today. That's where we should be going with this. >> Well, a good example would be, and probably part of the answer's today: when will machines make better diagnoses than doctors? In some cases it probably exists today, but not broadly; but that's a good example, right?
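A quick aside to ground the adversarial loop just described: the sketch below pits a small generator against a discriminator over samples from a 1-D Gaussian. PyTorch is an assumption here (no framework was named), and the layer sizes, learning rates, and step count are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data the generator must learn to imitate: mean 3.0, std 0.5.
    return torch.randn(n, 1) * 0.5 + 3.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator turn: learn to call real samples 1 and fakes 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()   # don't backprop into G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator turn: try to make the discriminator call fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f} (target 3.00, 0.50)")
```

Each side's loss is the other side's training signal, which is the "two things fighting each other" dynamic in a couple dozen lines.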
So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes. It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. 
>> I think we're at a really critical inflection point, where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but then we're going to throw more engineering at it and start bringing the cost down. So I see this starting to look a lot more like a brain: something where we can have intelligence everywhere, at various levels; very low-power, ubiquitous compute, and then very high-power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're very welcome. All right, keep it right there, everybody, we'll be back with our next guest. Dave Vellante for Justin Warren, you're watching theCUBE, live from AWS re:Invent 2019. We'll be right back. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
20 watts | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2014 | DATE | 0.99+ |
10 million | QUANTITY | 0.99+ |
Naveen Rao | PERSON | 0.99+ |
Justin Warren | PERSON | 0.99+ |
20 million | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Taiwan | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
100 radiologists | QUANTITY | 0.99+ |
Alan Turing | PERSON | 0.99+ |
Naveen | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
billions | QUANTITY | 0.99+ |
a month | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
two part | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
one piece | QUANTITY | 0.99+ |
Thursday | DATE | 0.99+ |
Kate Darling | PERSON | 0.98+ |
early 2000s | DATE | 0.98+ |
two billion | QUANTITY | 0.98+ |
first smartphones | QUANTITY | 0.98+ |
one side | QUANTITY | 0.98+ |
Sands Convention Center | LOCATION | 0.97+ |
today | DATE | 0.97+ |
OpenVINO | TITLE | 0.97+ |
one radiologist | QUANTITY | 0.96+ |
Dr. | PERSON | 0.96+ |
16 year old | QUANTITY | 0.95+ |
two phases | QUANTITY | 0.95+ |
trillions of parameters | QUANTITY | 0.94+ |
first | QUANTITY | 0.94+ |
a million times | QUANTITY | 0.93+ |
seven years | QUANTITY | 0.93+ |
billions of observations | QUANTITY | 0.92+ |
one thing | QUANTITY | 0.92+ |
one extreme | QUANTITY | 0.91+ |
two competing sides | QUANTITY | 0.9+ |
500 trillion model | QUANTITY | 0.9+ |
a year | QUANTITY | 0.89+ |
five | QUANTITY | 0.88+ |
each | QUANTITY | 0.88+ |
One area | QUANTITY | 0.88+ |
a couple of months ago | DATE | 0.85+ |
one sort | QUANTITY | 0.84+ |
two neural | QUANTITY | 0.82+ |
GANs | ORGANIZATION | 0.79+ |
couple of weeks | QUANTITY | 0.78+ |
DeepRacer | TITLE | 0.77+ |
millions of | QUANTITY | 0.76+ |
Photoshop | TITLE | 0.72+ |
deepfakes | ORGANIZATION | 0.72+ |
next few years | DATE | 0.71+ |
year | QUANTITY | 0.67+ |
re:Invent 2019 | EVENT | 0.66+ |
three | QUANTITY | 0.64+ |
Invent 2019 | EVENT | 0.64+ |
about | QUANTITY | 0.63+ |