
Tom Siebel, C3 IoT | AWS re:Invent 2017


 

>> Narrator: Live, from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners.
>> Hello, everyone, welcome back to theCUBE. This is SiliconANGLE's exclusive coverage with theCUBE, here at Amazon re:Invent 2017. It's our 5th year covering Amazon's explosive growth. I'm John Furrier, the founder of SiliconANGLE Media. I'm here with Justin Warren, my cohost. Our next guest on set one is Tom Siebel, who is the founder and CEO of C3 IoT, an industry legend, knows the software business, been around the block a few times, and now part of the new wave of innovation. Welcome to theCUBE.
>> Thank you.
>> I hear you just got in from San Francisco. What a world we're living in. You're at the front end of your company that you founded and are running, an IoT big data play, doing extremely well. Even last year, the whisper in the hallway was C3 IoT is absolutely doing great, on the industrial side, certainly on the federal government side, and on commercial. Congratulations!
>> Thank you.
>> What's the update? What's the secret formula?
>> Well, we live at the convergence of elastic cloud computing, big data, AI, and IoT, and at the point where those converge, I think, is something called digital transformation, where you have these CEOs that, candidly, I think, are concerned that companies are going through a mass-extinction event. I mean, 52% of the Fortune 500 companies as of 2000 are gone, right, they've disappeared, and it's estimated as many as 70% might disappear in the next 10 years. And we have this new species of companies with new DNA that look like Tesla and Uber and Amazon, and they have no drivers, no cars, and yet they own transportation. I think these CEOs are convinced that, unless they take advantage of this new class of technologies, they might be extinct.
>> And we're certainly seeing it, too, in a lot of the old guard, as Andy Jassy calls it, really talking about Oracle, IBM, and some of the other folks that are trying to do cloud, but they're winning. I gotta ask you, from your perspective, what's the main difference now in the culture of a company that's trying to transform? What's the big difference between the old way and the new way that has to be implemented quickly, or extinction is a possibility? I mean, it's not just suppliers, it's the customers themselves.
>> The customers have changed.
>> What's the difference?
>> So, this is my 4th decade in the information technology business, and I've seen the business grow from a couple hundred billion to, say, two trillion worldwide. I've seen it go from mainframes to mini-computers, to personal computers, to the internet, all of that. And I was there when, in all of those generations of technology, when we brought those products to market, they would come up in the organization, through the IT organization, to the CIO, and the CIO would say, "Well, we're never gonna use a mini computer," or, "We're never gonna use relational database technology," or, "We're never gonna use a PC." And so, you'd wait for that CIO to be fired, then he'd come back two years later, right? So meanwhile we built a two trillion dollar information technology business, globally. Now, what's happening in this space of big data, predictive analytics, IoT, is all of a sudden, it's the CEO at the table.
The CEO was never there before, and the CEO is mandating this thing called digital transformation, and he or she is appointing somebody in the person of a Chief Digital Officer, who has a mandate and basically a blank check to transform the company and get it done. Whereas it used to be that the CIO would report to the CEO once a quarter at the quarterly off-site, the Chief Digital Officer reports to the CEO every week. And virtually every one of our customers, CAT, John Deere, United Healthcare, you name it, ENGIE, Enel, it's a CEO-driven initiative.
>> You bring up a good point I wanna get your thoughts on, because the old way, as you mentioned, was IT reporting to the CIO. They ran things, they ran the business, they ran the plumbing, and software was part of that. Now software is the business. No one goes to the teller. The bank relationship is the software, or whatever vertical you're in, there's now software, whether it's at the edge, whether it's data analytics, that is the product to the consumer. So, the developer renaissance, we see software now changing, where the developer's now an influencer in this transformation.
>> True.
>> Not just, hey, go do it, and here's some tools; they're part of that. Can you share your perspective on this? Because, if we're in a software renaissance, that means a whole new creativity's gonna unleash with software. With that role of the CDO, with the blank check, there's no dogma anymore. It's results. So, what's your perspective on this?
>> Well, I think that there's enabling technologies that include the elastic cloud, that include computation and storage that's basically free, right? Everything is a computer, so IoT, I used to think about IoT being devices, but IoT is a change in the form factor of computers.
In the future, everything's a computer: your eyeglasses, your watch, your heart monitor, your refrigerator, your pool pump, they're all computers, right? And then we have the network effect of Metcalfe's law; say we have 50 billion of these devices fully connected, well, that's a pretty powerful network. Now, these technologies, in turn, enable AI, they enable machine learning and deep learning. Hey, that's a whole new ball game. Okay, we're able to solve classes of problems with predictive analytics and prescriptive analytics that were simply unsolvable before in history, and this changes everything about the way we design products, the way we service customers, the way we manage companies. So, I think this AI thing is not to be underestimated. I think the cloud, IoT, big data, devices, those are just enablers, and I think AI is--
>> So, software and data's key, right? Data trains the AI, data is the fundamental new lifeblood.
>> Big data, because now we're doing, what big data is about, people think that big data is the fact that an exabyte is more than a gigabyte, that's not it. Big data is about the fact that there is no sampling error. We have all the data. So, due to limitations of storage and processing, we used to, you know, basically take samples and infer results from those samples, and deal with the level of confidence error that was there. With big data, there's no sampling error.
>> It's all there.
>> It is a whole different game.
>> We were talking before, and John, you mentioned before, about the results that you need to show. Now, I know that you picked up a big new customer that I hope you can talk about publicly, which is a public-sector customer, and that sounds like something where you're doing predictive maintenance for the Air Force, for the U.S.
Air Force, so that's a big customer, good win there, but what is the result that they're actually getting from the use of big data and this machine learning analytics that you're doing?
>> By aggregating all the telemetry, and aggregating all their maintenance records, and aggregating all their pilot records, and then building machine learning classifiers, we can look at all the signals and we can predict device failure or systems failure well in advance of failure. Some pretty substantial percentages, say, of F-16s will not deploy, of F-18s will not deploy because, you know, they go to push the button and there's a system failure. Well, if we can predict system failure, the cost of maintenance goes down dramatically and, basically, it doubles the size of your fleet, so the economic benefit is staggering.
>> Tom, I gotta ask you a personal question. I mean, you've been through four decades, you're a legend in the industry, what was the itch that got you back with this company? Why did you found and run C3 IoT? What was the reason? Was it an itch you were scratching, like, damn, I want the action? I mean, what was the reason why you started the company?
>> Well, I'm a computer scientist, and out of graduate school, I went to work with a young entrepreneur by the name of Larry Ellison. Turned out to be a pretty good idea. And then a decade later, we started Siebel Systems, and I think, well, we did invent the CRM market, and that turned out to be a pretty good idea. And I just see, at this intersection of these vectors we talked about, everything changes about computing. This is a complete replacement market, and I thought, you know, there's opportunity to play a significant role in the game, and this is what I do, you know. I collect talented people and try to build great companies and make customers satisfied. This is my idea of a good time.
>> You're on the beach, you're on your board hangin' 10 on the big waves. What are the waves?
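Siebel's predictive-maintenance idea, learn from labeled telemetry, then flag at-risk units before they fail, can be reduced to a toy sketch. Everything here, the readings, the threshold rule, the tail numbers, is invented for illustration; C3's production systems build far richer classifiers over many signals.

```python
# Toy failure-prediction sketch (hypothetical numbers, not C3's models).
# (unit_id, vibration_reading, failed_within_30_days)
history = [
    ("A1", 0.31, False), ("A2", 0.35, False), ("A3", 0.82, True),
    ("A4", 0.40, False), ("A5", 0.77, True),  ("A6", 0.29, False),
]

# "Train": pick a threshold halfway between the failing and healthy groups.
failed = [v for _, v, f in history if f]
healthy = [v for _, v, f in history if not f]
threshold = (min(failed) + max(healthy)) / 2

def predict_failure(vibration: float) -> bool:
    """Flag a unit for maintenance before it fails in the field."""
    return vibration >= threshold

current_fleet = {"B1": 0.33, "B2": 0.80, "B3": 0.58}
flagged = sorted(uid for uid, v in current_fleet.items() if predict_failure(v))
print(flagged)  # units to pull for proactive maintenance
```

The economics Siebel cites follow directly: every unit flagged here is a mission that would otherwise have been scrubbed at the push of the button.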
We're seeing this inflection point, a lotta things comin' together, what are the waves that you're ridin' on right now? Obviously, the ones you mentioned, but what's the set look like, if I can use a surfing analogy? What's coming in, what are the big waves?
>> The two biggest ones are IoT and AI. I mean, since 2000 we've deployed 19 billion IoT sensors around the world. In the next five years, we'll deploy 50 billion more. Everything will be a computer, and you connect all these things so they're all computing and apply AI, I mean, we're gonna do things that were, you know, unthinkable, in terms of serving customers, building products, cost efficiencies. We're gonna revolutionize healthcare with precision health. Processes like energy extraction and power delivery will be much safer, much more reliable, much more environmentally friendly. This is good stuff.
>> So, what's your take on the security aspect of putting a computer in everything? Because, I mean, the IT industry hasn't had a great track record on security, and now we're putting computers everywhere. As you say, they're gonna be in watches, they're gonna be in eyeglasses. What do you see as the trend in the way that security is gonna be addressed for this, computers everywhere?
>> Well, I think that it is clearly not yet solved, okay, but it is a solvable problem. I believe that it's easier to secure data in cyberspace than it is in your own data room. Maybe you could secure data in your data room when it took a forklift to move a storage device. It doesn't take a forklift anymore, right? It takes one of these little flash drives, you know, to take all the data. So, I think the easiest place we can secure it is gonna be in cyberspace. I think we'll use encryption, I think we'll be computing on encrypted data, and we haven't figured out algorithms to do that yet. I think blockchain will play an important role, but there's some invention that needs to happen, and this is what we do.
>> So, you like blockchain?
>> I think blockchain plays a role in security.
>> It does. So, I gotta ask you about, the way you're sinking your teeth into a new venture, it's exciting, it's on the cutting edge, on the front lines of the innovation. There are a lotta other companies that are trying to retool. IBM, Microsoft, Oracle. If you were back at them, it's probably not as exciting as what you're doing, because you've got a new clean sheet of paper. But if you're Oracle, if you're Larry, and he went to be CTO, he's trying to transform, he's getting into the action, they got a lot to do there. IBM, same thing, same with Microsoft. What's their strategy in your mind? If you were there, at the helm of those companies, what would you do?
>> Well, number one, I would not bet against Larry. I know Larry pretty well, and Larry is a formidable player in the information technology industry, and if you have to identify one of four companies that's surviving in the long run, Oracle is in that consideration, in that set. So I think betting against Larry is a bad idea.
>> He'll go to the mat big time, won't he? I mean, Jassy, there's barbs going back and forth, you gotta be careful there.
>> Well, I mean, Andy Jassy is extraordinarily competent. I think, as it relates to this elastic cloud, he's kinda got a lock on that. But, you know, IBM is hard to explain. I mean, IBM is a sad story. I think there's some risk that IBM is the next Hewlett-Packard. I mean, they might be selling this thing off for piece parts. I mean, if we look at the last 23 quarters, it's not good.
>> And Microsoft's done a great job recently with Satya Nadella, and they're retooling fast. You can see them beavering away.
>> But IBM, I mean, how do you bet against the cloud? I mean, are you kidding me? I mean, hello! IBM's a sad story. It's one of the world's great companies, it's an icon. If it fails, and companies of IBM's size do fail, I mean, let's look at GE, that would be a sad state for America.
>> Okay, on a more positive, upbeat note, what's next for you? Obviously, you're doing great, the numbers are good. Again, the rumors in the hallways we're hearing are that you guys are doing great financially. Not sure if you can share any color on that, or big wins. Obviously, these are not little deals you're on, but what's next? What's the big innovation that you got comin' around the corner for C3 IoT?
>> Well, our business grew last year about 600%, and this year it'll grow about 300%. We're a profitable, cash-positive business. Our average customer is, say, a $20 billion to $200 billion business. We're engaged in very, very large transactions. In the last 18 months, we've done a lotta work in deep learning, okay. In the next 18 months, we'll do a lotta work in NLP. I think those technologies are hugely important. Technologically, this is where we'll be going. I think machine learning, traditional ML, we have that nailed. Now we're exploiting deep learning in a big way using GPUs, and a lotta the work that Jensen Huang's doing at Nvidia, and now NLP, I think, is the next frontier for us.
>> Final question for you: advice to other entrepreneurs. You're a serial entrepreneur, you've been very successful, you've invented categories. You're looking at Amazon; how do you work with the Amazons of the world? What should entrepreneurs be thinking about in terms of how to enter the market, funding, just strategy in general? The rules have changed a little bit. What advice would you give the young entrepreneurs out there?
>> Okay, become a domain expert at whatever domain you're proposing and whatever field you're gonna enter, and then surround yourself with people, whatever job they're doing, engineering, marketing, sales, F&A, who are better than you at what they do. To the extent that I have succeeded, this is why I've succeeded.
Now, this might be easier for me than for others, but I try to surround myself with people who are better than me and, to the extent that I've been successful, that's why.
>> We really appreciate you taking the time to come on. You're an inspiration, a serial entrepreneur, founder and CEO Tom Siebel of C3 IoT, a hot company and a big part of the Amazon Web Services ecosystem. Doing great stuff; again, a serial entrepreneur with a great four-decade career. Thanks for coming on theCUBE, Tom Siebel. Here inside theCUBE, I'm John Furrier with Justin Warren, here in Las Vegas for AWS re:Invent. We'll be back with more live coverage after this short break.
>> Thanks guys, good job.

Published Date : Nov 29 2017



Oracle Aspires to be the Netflix of AI | Cube Conversation


 

(gentle music playing)
>> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave, in-database machine learning, and AI pushed into Oracle's business apps. Oracle tends to lead in AI, but not by competing as a direct AI player per se, rather by embedding AI and machine learning into its portfolio to enhance its existing products and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors, and they announced a pretty significant investment to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise-scale AI. Oracle CEO Safra Catz said something to the effect that this alliance is going to help customers across industries, from healthcare, manufacturing, telecoms, and financial services, to overcome the multitude of challenges they face. Presumably she was talking about driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show.
>> Thank you. Thanks for having me.
>> You're very welcome. So first let's talk about Oracle's path to AI.
I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services. You know, could you tell us what you've been working on, how you got here?
>> So, great question. As you mentioned, I think most of the original foray into AI was embedding AI and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI, and of course all our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure, so that developers, and ISVs, and customers can start using our AI capabilities to make their apps better and their experiences and business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform, so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud.
>> So when I look at the market, I see companies out there like DataRobot or C3 AI; there's maybe a half dozen that sort of pop up on my radar anyway. And my premise has always been that most customers don't want to become AI experts; they want to buy applications and have AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI?
>> So it's a great question. I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise.
I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. It's still Lego blocks, you still need to put some pieces together, but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. Whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line-of-business applications, integrating these things into your analytics dashboards, even all the way to a new platform piece we call ML applications, which allows you to take a machine learning model and scale it for the thousands of tenants that you may have. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and deploy it across your thousands of customers or your thousands of regions or facilities, it becomes messy. So this is where we spend our time: making it easy to take these things into production in the context of the business application or the business use case that you're interested in right now.
>> So you mentioned chat bots, and I want to talk about ChatGPT, but my question here is different; we'll talk about that in a minute.
So when you think about these chat bots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well have a conditioned response. Now they're limited, but they say, which of the following is your problem? And if one of the following is your problem, you can maybe solve your problem. But this is clearly a trend, and it helps the line of business. How does Oracle think about these use cases for your customers?
>> Yeah, so I think the key here is exactly what you said. It's about task completion. The general-purpose bots are interesting, but as you said, they're still limited. They're getting much better, and I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chat bot where I submit my expenses, the bot asks a couple of questions, I answer them, and then I'm done. Like, I don't need to go to our fancy application and manually submit an expense report. I do this via Slack. And the key is around managing the right expectations of what this thing is capable of doing. I have a story from, I think, five, six years ago, when the technology was much inferior to what it is today. One of the telco providers I was working with wanted to roll out a chat bot that does realtime translation. It was for a support center, for their call centers. And what they wanted to do is, hey, we have English-speaking employees, whatever, 24/7; if somebody's calling and their native tongue is different, like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with, that will translate this on the fly, and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support.
And what they've done is they've changed the narrative. Instead of, you go to a support center and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, hey, if you want to talk to a Hebrew-speaking person, there's a four-hour wait; please leave your phone number and we'll call you back. Or you can try a new, amazing, Hebrew-speaking, AI-powered bot, and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Press one to try it out. And the feedback, even though it was the exact same technology, was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they had hated a few weeks earlier. So I think the key lesson that I picked up from this experience is it's all about setting the right expectations and working around the right use case. If you are replacing a human, the bar is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we have: picking the tasks that people want to accomplish, or that the enterprise wants to accomplish for their customers, for their employees, and using chat bots to make those specific ones better, rather than, hey, this is going to replace all humans everywhere, and just be better than that.
>> Yeah, I mean, to that point, you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI-driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there.
>> So over the last, I would say, two or three years, we've developed a full suite of machine learning and AI services for pretty much every use case that you would expect right now: from applying natural language processing to understanding customer support tickets or social media or whatnot, to computer vision services that can understand and detect objects, count objects on shelves, or detect cracks in a pipe or defective parts, all the way to speech services that can actually transcribe human speech. And most recently we've launched a new document AI service that can look at unstructured documents like receipts or invoices or government IDs, or even proprietary documents, loan applications, student application forms, patient intake forms and whatnot, and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use, easily customized with their own data. You don't need to be an expert in deep learning or large language models. You can just use our AutoML capabilities, build your own version of the models, and just go ahead and use them. And if you do have proprietary, complex scenarios that you need to build custom from scratch, we actually have the most cost-effective platform for that. So we have OCI Data Science, as well as built-in machine learning platforms inside the databases, inside the Oracle Database and MySQL HeatWave, that allow data scientists, Python-wielding people who actually like to build and tweak and control and improve, to have everything they need to go and build machine learning models from scratch, deploy them, and monitor and manage them at scale in a production environment. And most of it is brand new.
So we did not have these technologies four or five years ago; we've started building them, and they've reached enterprise scale over the last couple of years.
>> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models?
>> So I think it's on three layers. I think there's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want a massively efficient, massively scaled infrastructure. So we are the most cost-effective and performant large-scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s, the new super powerful GPUs for large language model training. So we have that covered for you in case you need it, 'cause you want to build these ginormous things. Then you need a data science platform, a platform where you can open a Python notebook and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need what I call the applied AI services. You need the Lego sets, the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services, like, again, understanding images or documents, all the way to solving particular business problems. So an anomaly detection service, a demand forecasting service, those are the equivalent of these Lego sets. If this is the business problem that you're looking to solve, we have services out there where you can bring your data, call an API, train a model, get the model, and use it in your production environment.
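The "bring your data, call an API, train a model, use the model" flow Ziklik describes reads, schematically, like the sketch below. The `AnomalyService` class is a stand-in invented for illustration (a simple z-score detector), not the real OCI SDK or its Anomaly Detection API:

```python
# Schematic of the "applied AI service" pattern: train behind an API,
# then call the trained model from production code. Hypothetical names.
import statistics

class AnomalyService:
    def train(self, values):
        """'Bring your data, train a model': fit on normal readings."""
        self.mean = statistics.fmean(values)
        self.std = statistics.stdev(values)
        return self  # the trained-model handle

    def detect(self, value, z=3.0):
        """'Call an API': flag readings far outside the normal band."""
        return abs(value - self.mean) > z * self.std

normal_readings = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
model = AnomalyService().train(normal_readings)
print(model.detect(10.1))  # a normal reading
print(model.detect(25.0))  # an anomalous reading
```

The point of the Lego-set analogy is that the consumer of such a service writes only the last few lines; the modeling behind `train` and `detect` is the vendor's problem.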
So wherever you want to play, all the way to embedding this thing inside applications, wherever you want to play, we have the tools for you to go and engage, from infrastructure at the bottom to SaaS at the top and everything in the middle. >> So when you think about the data pipeline and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists. The developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? They're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms at other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other, and the handoff was painful. And most machine learning projects evaporate or die on the floor because of this problem. It's very rare that they are unsuccessful because the algorithm wasn't good enough. In most cases somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint, and integrate it back; it's too painful. So the way we are approaching this is focused on exactly this problem. 
We have a single set of tools so that if you publish a model, data scientists and developers, and even business analysts sitting inside a business application, can consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, borrowing a phrase the Cerner folks used, and I really liked it, insight highways, to make it easier to bring these insights into where you need them inside applications: inside our own SaaS applications, but also inside custom third-party and even first-party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we have now built the tools, and we're making them available to others, to make this process of building a machine learning, outcome-driven insight in your app easier. And it's not just the model development, and it's not just the deployment; it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes back from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem. >> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about Oracle Database, it never met a hardware platform that it didn't like. So in that sense it's open. But my point is, a big part of machine learning and AI is driven by open source tools and frameworks. What's your open source strategy? What do you support from an open source standpoint? 
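The single-model-store idea, one published artifact that data scientists produce and developers consume, can be sketched minimally. The class and method names below are invented for illustration and are not Oracle's actual API:

```python
class ModelRegistry:
    """Toy single model store: data scientists publish, apps consume."""

    def __init__(self):
        self._models = {}

    def publish(self, name, version, predict_fn):
        # A "model" here is just a callable; a real store would hold
        # serialized artifacts, metadata, and lineage.
        self._models[(name, version)] = predict_fn

    def serve(self, name, version):
        return self._models[(name, version)]

registry = ModelRegistry()
# A data scientist publishes a trained model (a plain callable here).
registry.publish("churn", "v1", lambda features: features["tenure"] < 6)
# A developer pulls the exact same artifact into an application.
model = registry.serve("churn", "v1")
print(model({"tenure": 3}))  # True
```

The design point being made in the interview is that both personas hit the same store, so there is no lossy handoff between the notebook and the application.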
>> So I'm a strong believer that nobody actually knows where the next leapfrog, the next industry-shifting innovation in AI, is going to come from. If you look six months ago, nobody had foreseen DALL-E, the magical text-to-image generation, and the explosion it brought to art and design type of experiences. If you look six weeks ago, I don't think anybody had foreseen ChatGPT and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows; if anybody claims they know where the innovation is going to come from in a year or two, let alone in five or 10, they're just wrong or lying. So our strategy for Oracle is what I call the Netflix of AI. If you think about Netflix, they produce a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge watched Ginny & Georgia. But they also curate a lot of shows that they find around the world and bring them to their customers. So it started with things like Seinfeld or Friends, and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix brought in. They brought it in as-is and gave it the Netflix value, so you have captioning, you have the ability to speed up the playback, you have it inside your app, you can download it and watch it offline and everything, but nobody at Netflix was involved in the production of those first seasons. Now if these things hit and they're great, then the third season or the fourth season gets the full Netflix production value: high budget, high value location shooting, or whatever. But you as a customer, you don't care whether the producer and director and screenplay writer is a Netflix employee or somebody else's employee. It is fulfilled by Netflix. 
I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in the places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfilled-by-Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle, but we don't have to be building the data science and the models for everything. So this means open source. We recently announced a partnership with Anaconda, the leading provider of the Python distribution in the data science ecosystem, a joint strategic partnership bringing all that goodness to Oracle customers. We are in the process of doing the same with Nvidia and all their software libraries, not just the hardware, for things like Triton, but also for healthcare-specific stuff, as well as with other leading AI ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers want: the ability to choose any open source engine, and honestly any ISV type of solution that is AI-powered, and use it in their experiences. >> So you mentioned ChatGPT. I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT and, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market and the hype is somewhat surprising. You know, they say that we tend to over-hype things in the early stages and under-hype them long term; you kind of used the internet as an example. What's your take on that premise? >> So. 
I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example; I saw this thing open on my son's laptop. So I think education is going to be transformed. The design industry, like images or whatever, has already been transformed. But I think that for mass adoption, like beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, certain things need to happen. One is this thing needs to become more reliable. Right now it is a complete black box that sometimes produces magic and sometimes produces just nonsense. And it needs to have better explainability and better lineage: how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with customers or use internally. So I think that is one thing that's going to come out. And the other thing that's going to come out, I think, is industry-specific large language models, industry-specific ChatGPTs, something like how OpenAI did Copilot for writing code. I think we will start seeing this type of app solving specific business problems: understanding contracts, understanding healthcare, writing doctors' notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that will become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle, that will use this type of technology to solve for specific, very high value business needs. And I think this will change how interfaces happen. 
So going back to your expense report: the world of, I'm going to go into an app and click on seven buttons in order to get some job done, that world is gone. I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales: a customer sends an email saying they're interested in something, and then a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Yes, there's a ton of hype; yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are being managed. I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we work. Instead of accessing the data center through APIs, we're going to access it through natural language processing, and that opens up technology to a huge audience. Last question; it's a two-part question. The first part is what you guys are working on for the future, and the second part is, we've got data scientists and developers in our audience. They love the new shiny toy. So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there are two main things that we're doing. One is around healthcare. With our recent acquisition, we are spending a significant effort on revolutionizing healthcare with AI, across many scenarios: from patient care using computer vision and cameras, through automating and improving insurance claims, to research and pharma. We are making the best models from leading organizations, and our own, available for hospitals and researchers and insurance providers everywhere. 
And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. If you have a business problem that you want to apply AI to solve, we want to be your platform. You could use others if you want to build everything complicated and whatnot; we have a platform for that as well. But if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI kind of thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make a truly great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here: we have a free tier for all of our AI services, so it typically costs you nothing. I would highly recommend just going and trying these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to see whether I can understand them, apply AI, visualize, and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us here, 'cause I see people that have nothing to do with AI, that can't even spell AI, going and trying it out, I think this is the time. Go play with these things, go play with these technologies, and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. 
Love that and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)

Published Date : Jan 24 2023

Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI, the company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it can deploy to run its systems. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner-take-all advantage to OpenAI. And to do so, we welcome CUBE collaborator and alum Sarbjeet Johal (chuckles) and John Furrier, co-host of theCUBE. Great to see you, Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it to give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, I think it did well. It's a large language model, like previous models, but it applies the transformer mechanism to focus on the prompt you have given it, and also on the answer it gave you in the first sentence or two, and then it introspects on what it has already said and works from that. So it's self-focusing, if you will. 
It does, the transformers help the large language models do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the word "large". So one of the problems with ChatGPT is that it's not always accurate. It's actually a large language model, and it says state-of-the-art language model. And if you look at Google, Google has dominated AI for a long time, and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play, and has been holding back its release because of backlash on the accuracy. Like, that example you showed is a great point. They got it almost right, but they missed the key word. >> You know what's funny about that, John, is I had previously asked it in my prompt to give it to me in less than a hundred words, and it was too long. I said it was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it gave me a really different answer both times. But it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Sam Charrington from the This Week in ML & AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts. Basically what Sam did is he prompted ChatGPT to interview ChatGPT; he simply gave the system the prompts, then ran the questions and answers into an avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So, John, is ChatGPT going to take over as a CUBE host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug them into ChatGPT, add it to our notes, and say, "Is this good enough for you? Let's ask the real question." 
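The "transformer" mechanism discussed above, the model weighing parts of its input against each other, boils down to attention. A minimal single-query sketch (toy vectors, no learned projections, so only the idea and not a real model) looks like this:

```python
import math

def softmax(scores):
    # Shift by the max for numerical stability, then normalize to sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Blend the values, weighting each by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
# The query resembles the first key, so the output leans toward the first value.
print(attention([1.0, 0.0], keys, values))
```

In a real transformer this runs for every token against every other token, with learned query, key, and value projections, which is how the model "focuses" on the relevant parts of the prompt.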
So I think, you know, there's a lot of heavy lifting that gets done. I think ChatGPT is a phenomenal revolution. I think it highlights the use case, like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it: writing marketing copy, brainstorming, a guest list, a gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang-up job, and then you've got to go in and massage it. So it's going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS, to help it do stuff on behalf of the user? So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on Microsoft, but it kind of levels the playing field- ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to the Reed Hastings fireside chat with ChatGPT, and the answers were coming back in voice format. And it was amazing. He was having a very philosophical kind of talk with ChatGPT, with longer sentences. He was going on, just like we are talking, for almost two minutes, and then ChatGPT was answering. It was not one-sentence questions with short answers from ChatGPT. And yeah, you're right. I've been thinking deeply about this since yesterday, when we talked about wanting to do this segment. The data is fed into the data model; it can be current data as well. But I think that models like ChatGPT, other companies will have those too. 
They can, they're democratizing the intelligence, but they're not creating intelligence yet; definitely not yet, I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java versus, you know, C#? As a programmer you can ask that. But they can't tell you how to write a new algorithm, or write a new search algorithm for you. They cannot create secret code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> But you- >> Can Google do that today? >> No one really can. The reasoning side of the data is what we talked about at our Supercloud event with Zhamak Dehghani, who's now CEO of Nextdata. This next wave of data intelligence is going to come from entrepreneurs who are probably cross-discipline: computer science and some other discipline. But there are going to be new things, for example, data, metadata, and data reasoning. It's hard to do reasoning like a human being, so that needs more data to train itself. So I think the first gen of the training corpus for the large language model they have is a corpus of text, a lot of it blog posts, and the facts are sometimes wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of proprietary content, like theCUBE or SiliconANGLE as a publisher, will benefit from this. Large FinTech companies, anyone with large proprietary data, will probably be a big winner on this generative AI wave, because it will just eat that up and turn it back into something better. So I think there's going to be a lot of interesting things to look at here. 
And certainly productivity is going to be off the charts, and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play, Dave (indistinct). >> I think the playing field has been risen, so we- >> Risen and leveled? >> Yeah, and leveled to a certain extent. So it's not like only a few people, as consumers of AI, will have an advantage that others can't have. It will be democratized; I'm sure about that. But if you take the example of the calculator, when the calculator came in, a lot of people said, "Oh, people can't do math anymore because the calculator is there," right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because if you think about what ChatGPT's doing, you do a query and it comes from somewhere; the value of a post from ChatGPT is just a reuse of AI. The original content will come from a human. So if I lay out a paragraph from ChatGPT, it did some heavy lifting on some facts, I check the facts, it saves me about maybe- >> Yeah, it's productive. >> An hour of writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I've then taken that body of work, open source content, and laid something on top of it. >> And Sarbjeet's example is a good one, because like with the calculator, kids don't do math as well anymore, or the slide rule; remember we had slide rules as kids. Remember when we first started using Waze; we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic navigation, and everybody has it, you know- >> All the back roads are crowded. >> They're crowded with cars. (group laughs) Exactly. All right, let's move on. 
What about this notion that futurist Roy Amara put forth, Amara's Law, that we're showing here: "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think, Sarbjeet? >> I think that's true actually. There's a lot of- >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say, what, what the heck? Like, it can do this? But then if you use it more and more and more, and ask a set of similar questions, not the same question, it gives you the same answer. It's like reading from the same bucket of text (indistinct) where, with ChatGPT, you will see that in some couple of segments. It sounds so boring that ChatGPT is coming out with the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing; in the short term, it's not something which will (indistinct) >> What's your counterpoint? You're saying it's not. >> I, no, I think the question was, it's hyped up in the short term and underestimated long term. That's what I think he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing it, to the internet: the web, the web browser, Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there were not many sites you could go to; everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, overhyped at the beginning and underestimated. 
>> No, it was underestimated in the long run, people thought. >> But that's Amara's Law. >> That's what it is. >> No, they said overestimated? >> Overestimated near term, underestimated- overhyped near term, underestimated long term. I got that right, I mean? >> Well, yeah, okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well, there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So in '96, you had online populations growing, so you had structural changes going on around the browser and the internet population. And then it replaced other things, direct mail, other business activities that were once analog and then went to the web, kind of read-only, as we always talk about. So I think that's a moment where the smart money and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No, this is next gen, this is different, and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate, for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company? If you look at the history of the technology industry, it's kind of littered with first-mover failures. Altair, IBM, Tandy, Commodore, and even Apple were really early in the PC game. They took a backseat to Dell, who came on the scene years later with a better business model. 
Netscape, which you were just talking about, was all the rage in Silicon Valley with the first browser; drove up all the housing prices out here. AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment. >> Compaq bought it. And of course, as an aside, Digital wanted to showcase their hardware, right? Their supercomputer stuff. And then Friendster and MySpace came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and we can parse this, so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one, John. Bitcoin and crypto clearly had first mover advantage, and sustained it. Guys, will OpenAI make it to the list on the right with ChatGPT? What do you think? >> I think categorically as a company it probably won't, but as a category, I think what they're doing will. So OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, and now they've ponied it up; they're reportedly putting in 10 billion more. So, like with the browsers: Microsoft had competitive advantage over Netscape, and used monopoly power, and the Department of Justice went after them for killing Netscape with their monopoly. Netscape should have won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace-and-extend Microsoft power here makes OpenAI vulnerable as that one-vendor solution. So OpenAI as a company might not make the list, but the category of what this is, large language model AI, will probably be on the right-hand side. 
>> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here, the premise that ChatGPT's early entry into the market will not confer competitive advantage to- >> For OpenAI. >> To OpenAI, yeah. Do you agree with that? >> I agree with that actually, because Google has been at it, and they have been holding back, as John said, because of the scrutiny from the feds, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company, those guys, right? They have put this out there in a hasty way, you know, because it makes mistakes, and there are a lot of questions around, sort of, where the content is coming from. You saw that in your example; it just stole the content, without your permission, you know? >> Yeah. So as a quick aside- >> And it codes on people's behalf, and those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just a side note, a term which Reid Hoffman used a little bit. Like he said, it's an experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever, because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean, you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I want to add one more- >> Make your last point. >> Last one. Okay. So, he's still holding back. He's conducting quite a few interviews. If you want to get the gist of it, there's a StrictlyVC interview from yesterday with Sam Altman. Listen to that one; it's eye-opening what they want- where they want to take it. 
But the last point I want to make on this is that Satya Nadella yesterday did an interview with the Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10,000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's overpromising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatGPT, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, websites are going to be everywhere. It was just instant dot-connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. 
So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this is so new, you know, they don't have coverage yet, but they do cover AI. So basically what we're seeing here is a slide where the vertical axis is net score, which is a measure of spending momentum, and the horizontal axis is presence in the dataset. Think of it as, like, market presence. And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. 
So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammerspace, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data addressability, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lina Khan looms large and the governments around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lina Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). 
>> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But it's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like an evil thing like Pharma, it's like smoke- >> You know Lina Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. You can almost look at everything, and my 80/20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with Nadella. How do you think, where are they going to apply it? 
Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to how they used to move. >> So two years is like 10 years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is what she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google has got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick-ass cloud 'cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into not only Bing, but productivity software, helping people write papers, PowerPoint, also don't forget the cloud, AI can super help. 
We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. Combined with their R&D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. >> What's under the hood at Google, and why aren't they talking about it? I mean they got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the, they have the chops definitely. Magic bullet, I don't know where they are, as compared to the ChatGPT 3 or 4 models. Like they, but if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, that's the account you should look at if you are looking there; the techniques that ChatGPT 3 has used, they have been talking about for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, that anybody who has more data and the capacity to process that data, which Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over we have- >> We're so close to them. We have inside information on some of the thinking and that hasn't been reported by any outlet yet. And that is, is that, from what I'm hearing from my sources, is Google has it, they don't want to release it for many reasons. 
One is it might screw up their search monopoly; two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh it does everything for me," when it's clearly not a hundred percent accurate all the time. >> So Lina Kahn is looming, and so Google's like be careful. >> Yeah so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions happening, they're going to spin out the separate company and put it over there, and saying, "This is AI, got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean it's not advertising, right? Is it to charge you for a query? What, how do you make money at this? >> It's a good question, I mean my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, it's natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, and it takes a lot of compute power to train these, and for inference as well. 
Somebody did the analysis on how many cents a Google search costs Google, and how many cents a ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> A 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT >> Much more, yeah. >> Than for Google. >> It's very expensive. >> So Google's got the data, they got the infrastructure and they got, you're saying they got the cost (indistinct) >> No actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus index data already, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm, I'm saying Google search results. >> Ah, search results. >> What we're used to today, but cheaper. >> But that, does that, is that going to confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or how, what's the weather going to be? Or you know, how do I find this? Now they have done a smart job of doing some things with those queries, autocomplete, redirect navigation. But it's, it's not entity-based. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well I got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, and I want you to write a 750 word blog. I never input the transcript. It wrote a 750 word blog. It attributed quotes to him, and it just pulled a bunch of stuff that, and said, okay, here it is. 
It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow. But it was a big lie. It was fraudulent, but still, blew me away. >> Again, vanilla content and inaccurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, 'cause we got to, almost out of time. >> Okay, so the consumption, like how do you consume this thing. As humans, we are consuming it and we are, like, getting a nicely, like, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we as humans are consuming this as language. Like we read that, we listen to it, whatever format we consume that is, but the ultimate usage of that will be when the machines can take that output from the likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now, we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that model, paint my house. >> So, okay- >> By the way, that means drones that'll come in, spray-painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say. 
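[Editor's aside: the per-query cost gap discussed above can be sketched as back-of-envelope arithmetic. The cent figures here are illustrative assumptions for the sake of the sketch, not measured numbers; public estimates at the time put a conventional web search well under a cent per query and an LLM chat completion at a few cents.]

```python
# Back-of-envelope sketch of the per-query cost gap discussed above.
# The cost inputs are assumptions, chosen only to illustrate the
# "two orders of magnitude" claim made in the conversation.

def cost_ratio(search_cost_cents: float, llm_cost_cents: float) -> float:
    """Return how many times more an LLM query costs than a search query."""
    return llm_cost_cents / search_cost_cents

# Assumed figures: ~0.02 cents per keyword search vs. ~2 cents per completion.
ratio = cost_ratio(search_cost_cents=0.02, llm_cost_cents=2.0)
print(f"LLM query is roughly {ratio:.0f}x the cost of a search query")
```

Under those assumed inputs the ratio lands at roughly 100x, which is the order of magnitude the panel cites; swap in different per-query costs and the same two-line calculation gives the updated gap.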
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event, about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest to get a, you know, another shot to the OpenAI sort of level." You know, I did a (indistinct) >> Line up. >> A hundred million dollars is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys, a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said a hundred million or so, but in the hundreds of millions, not a billion right? >> You were trying to get him up, you were like "Hundreds of millions." >> Well I think, I- >> He's like, eh, not 10, not a billion. >> Well first of all, Howie Xu's an expert in machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, he's got his technology pedigree's really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than say the CapEx required to compete with AWS. Clearly, the CapEx spending to build all the tech to run a cloud. >> And you don't need a huge sales force. >> And in some cases apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know >> If the product's good, it will sell, this is a new era. The better mouse trap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databricks. Look at the amount of money they've raised. I mean it's like a billion dollars before they get to IPO or more. 'Cause they need promotion, they need go-to-market. 
You don't need (indistinct) >> OpenAI's been working on this for multiple years, five years plus; it wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, to your point Sarbjeet, earlier. It's like, yeah, he's trying to "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great." because he doesn't want to fall into that, you know, hero and then get taken down, so. >> It may take a 100 million or 150 or 200 million to train the model. But for the inference machine, it will take a lot more, I believe. >> Give it, so imagine, >> Because- >> Go ahead, sorry. >> Go ahead. But because it consumes a lot more compute cycles and a certain level of storage and everything, right, which they already have. So I think the compute is different. To train the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well there's a flywheel too. >> Oh that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but there's also kind of, like, context to it. So here, a hundred million, spend it, you get there, but you got to factor in the fact that the way companies win these days is critical mass scale, hitting a flywheel. If they can keep that flywheel of the value that they got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE in theCUBE. We have proprietary content, CUBE videos, transcripts. Well wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barrier to entry is going to be success. I'll give you an example, Reddit. 
Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can buy Reddit's open source software and try to compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars investments that are going to be seeping out. If, all right, we got to wrap, if you had to put odds on it that OpenAI is going to be the leader, maybe not a winner take all leader, but like you look at like Amazon and cloud, they're not winner take all, these aren't necessarily winner take all markets. It's not necessarily a zero sum game, but let's call it winner take most. What odds would you give that OpenAI 10 years from now will be in that position. >> If I'm 0 to 10 kind of thing? >> Yeah, it's like a horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1, >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic with, you know, there's so much competition, but- >> I think they're definitely the leader. I mean you got to say, I mean. >> Oh there's no question. There's no question about it. >> The question is can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster and they're more like Twitter and Reddit where they have momentum. 
If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they, say, stay neutral, as Sam has been saying, that, hey, Microsoft is one of our partners, if you look at their company model, how they have structured the company, then they're going to pay back to the investors, like Microsoft is the biggest one, up to a certain, like by a certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to the, I don't know who they give it to, like non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point though >> Actually they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted. >> The wildcard in all this, my last word on this is that, if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshes versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI-like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it, Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. 
Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you got to do is search Breaking Analysis podcast, wherever you listen. Published each week on wikibon.com and siliconangle.com. Want to get in touch, email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next time on Breaking Analysis. (electronic music)

Published Date : Jan 20 2023


Breaking Analysis: AI Goes Mainstream But ROI Remains Elusive


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> A decade of big data investments combined with cloud scale, the rise of much more cost effective processing power, and the introduction of advanced tooling has catapulted machine intelligence to the forefront of technology investments. No matter what job you have, your operation will be AI powered within five years and machines may actually even be doing your job. Artificial intelligence is being infused into applications, infrastructure, equipment, and virtually every aspect of our lives. AI is proving to be extremely helpful at things like controlling vehicles, speeding up medical diagnoses, processing language, advancing science, and generally raising the stakes on what it means to apply technology for business advantage. But business value realization has been a challenge for most organizations due to lack of skills, complexity of programming models, immature technology integration, sizable upfront investments, ethical concerns, and lack of business alignment. Mastering AI technology will not be a requirement for success in our view. However, figuring out how and where to apply AI to your business will be crucial. That means understanding the business case, picking the right technology partner, experimenting in bite-sized chunks, and quickly identifying winners to double down on from an investment standpoint. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we update you on the state of AI and what it means for the competition. And to do so, we invite into our studios Andy Thurai of Constellation Research. Andy covers AI deeply. He knows the players, he knows the pitfalls of AI investment, and he's a collaborator. Andy, great to have you on the program. Thanks for coming into our CUBE studios. >> Thanks for having me on. >> You're very welcome. 
Okay, let's set the table with a premise and a series of assertions we want to test with Andy. I'm going to lay 'em out. And then Andy, I'd love for you to comment. So, first of all, according to McKinsey, AI adoption has more than doubled since 2017, but only 10% of organizations report seeing significant ROI. That's a BCG and MIT study. And part of the challenge of AI is it requires data, it requires good data, data proficiency, which is not trivial, as you know. Firms that can master both data and AI, we believe, are going to have a competitive advantage this decade. Hyperscalers, as we'll show you, dominate AI and ML. We'll show you some data on that. And having said that, there's plenty of room for specialists. They need to partner with the cloud vendors for go-to-market productivity. And finally, organizations increasingly have to put data and AI at the center of their enterprises. And to do that, most are going to rely on vendor R&D to leverage AI and ML. In other words, Andy, they're going to buy it and apply it as opposed to build it. What are your thoughts on that setup and that premise? >> Yeah, I see that a lot happening in the field, right? So first of all, on only 10% realizing a return on investment, that's so true, because we talked about this earlier, most companies are still in the innovation cycle. So they're trying to innovate and see what they can do to apply it. A lot of the time when you look at the solutions, what they come up with, or the models they create, the experimentation they do, most times they don't even have a good business case to solve, right? So they just experiment and then they figure it out, "Oh my God, this model is working. Can we do something to solve it?" So it's like you found a hammer and then you're trying to find the nail kind of thing, right? That never works. >> 'Cause it's cool or whatever it is. >> It is, right? 
So that's why I always advise, when they come to me and ask me things like, "Hey, what's the right way to do it? What is the secret sauce?" And we talked about this. The first thing I tell them is, "Find out what is the business case that's having the most amount of problems, that can be solved using some of the AI use cases," right? Not all of them can be solved. Even after you experiment, do the whole nine yards, spend millions of dollars on that, right? And later on you make it efficient only by saving maybe $50,000 for the company or $100,000 for the company, is it really even worth the experiment, right? So you've got to start by asking, you know, where's the base for this happening? Where's the need? What's the business use case? It doesn't have to be about cost efficiency and saving money in the existing processes. It could be a new thing. You want to bring in a new revenue stream, but figure out what the business use case is, how much money potentially I can make off of that. The same way that start-ups go after it, right? >> Yeah. Pretty straightforward. All right, let's take a look at where ML and AI fit relative to the other hot sectors of the ETR dataset. This XY graph shows net score, spending velocity, on the vertical axis and presence in the survey, they call it sector pervasion, for the October survey; the January survey's in the field. Then that squiggly line on ML/AI represents the progression. Since the January 21 survey, you can see the downward trajectory. And we position ML/AI relative to the other big hot sectors, which with ML/AI make four: containers, cloud, and RPA. These have consistently performed above that magic 40% red dotted line for most of the past two years. Anything above 40%, we think, is highly elevated. And we've just included analytics and big data for context and relevant adjacency, if you will. Now note that green arrow moving toward, you know, the 40% mark on ML/AI. 
I got a glimpse of the January survey, which is in the field. It's got more than a thousand responses already, and it's trending up for the current survey. So Andy, what do you make of this downward trajectory over the past seven quarters and the presumed uptick in the coming months? >> So one of the things you have to keep in mind is when the pandemic happened, it was about survival mode, right? So when somebody's in survival mode, what happens? The luxury and the innovations get cut. That's what happens. And this is exactly what happened in this situation. So as you can see in the last seven quarters, which is almost dating back close to the pandemic, everybody was trying to keep their operations alive, especially digital operations. How do I keep the lights on? That's the most important thing for them. So while the money spent on AI/ML is less overall, I still think the AI/ML spend on sort of the employee experience or the IT ops, AIOps, MLOps, as we talked about, some of those areas actually went up. There are companies, we talked about it, Atlassian had a lot of platform issues, still the amount of money people are spending on that is exorbitant, simply because they are offering a solution that was not available any other way. So there are companies out there, you can take AIOps or incident management for that matter, right? A lot of companies have digital infrastructure and they don't know how to properly manage it. How do you find an incident and solve it immediately? That's all using AI/ML, and some of those areas, and the companies in them, are actually growing unbelievably. >> So this is a really good point. If you can bring up that chart again, what Andy's saying is a lot of the companies in the ETR taxonomy that are doing things with AI might not necessarily show up in a granular fashion. And I think the other point I would make is, these are still highly elevated numbers. If you put up, like, storage and servers, they would read way, way down the list. 
And look, in the pandemic, we had to deal with work from home, we had to re-architect the network, we had to worry about security. So those are really good points that you made there. Let's unpack this a little bit and look at the ML/AI sector in the ETR data, and specifically at the players, and get Andy to comment on this. This chart here shows the same XY dimensions, and it just notes some of the players that specifically have services and products that people spend money on, that CIOs and IT buyers can comment on. So the table insert shows how the companies are plotted; it's net score, and then the Ns in the survey. And Andy, the hyperscalers are dominant, as you can see. You see Databricks there showing strong as a specialist, and then you've got a pack of six or seven in there. And then Oracle and IBM, kind of the big whales of yesteryear, are in the mix. And to your point, companies like Salesforce that you mentioned to me offline aren't in that mix, but they do a lot in AI. But what are your takeaways from that data? >> If you could put the slide back on, please. I want to make quick comments on a couple of those. So the first one is, it's surprising, the other hyperscalers, right? As you and I talked about earlier, AWS is more about Lego blocks. We discussed that, right? >> Like what? Like a SageMaker as an example. >> We'll give you all the components you need. Whether it's an MLOps component or whether it's CodeWhisperer that we talked about, or a whole platform or data, whatever you want. They'll give you the blocks and then you'll build things on top of it, right? But Google took a different way. Matter of fact, if we did those numbers a few years ago, Google would've been number one, because they did a lot of work with their acquisition of DeepMind and other things. They were way ahead of the pack when it came to AI for the longest time. 
Now, I think Microsoft's move of partnering with OpenAI and taking a huge competitor out is unbelievable. You saw that everybody is talking about ChatGPT, right? The OpenAI tool, ChatGPT rather. Remember, as Warren Buffett says, when my laundry lady comes and talks to me about the stock market, it's heated up. So that's how it's heated up. Everybody's using ChatGPT. What that means at the end of the day is they're creating, well, it's still in beta, keep in mind. It's not fully... >> Can you play with it a little bit? >> I have a little bit. >> I have, but it's good and it's not good. You know what I mean? >> Look, so at the end of the day, you take the mass of all the available text in the world today, mash it all together. And then you ask a question, it's going to basically search through that and figure it out and answer that back. Yes, it's good. But again, as we discussed, if there's no business use case of what problem you're going to solve, this is building hype. But then eventually they'll figure out, for example, all your online chats could be aided by your AI chatbots, which is already there, but not at that level. This could help build that, right? Or the other thing we talked about, one of the areas where I'm more concerned, is that it is able to produce equal enough original text at the level that humans can produce. For example, ChatGPT, or equally, the large language transformer can help you write stories as if Shakespeare wrote them. Pretty close to it. It'll learn from that. So when it comes down to it, talk about creating messages, articles, blogs, especially during political seasons, not necessarily just in the US, but anywhere for that matter. If people are able to produce at machine speed and throw it at consumers and confuse them, elections can be won, governments can be toppled. 
>> Because to your point about chatbots, chatbots have obviously reduced the number of bodies that you need to support chat. But they haven't solved the problem of serving consumers. Most of the chatbots are conditioned responses: which of the following best describes your problem? >> The current chatbot. >> Yeah. Hey, did we solve your problem? No, is the answer. So that has some real potential. But if you could bring up that slide again, Ken, I mean, you've got the hyperscalers that are dominant. You talked about Google, and Microsoft is ubiquitous; they seem to be dominant in every ETR category. But then you have these other specialists. How do those guys compete? And maybe you could even cite some of the guys that you know. How do they compete with the hyperscalers? What's the key there for, like, a C3 AI or some of the others that are on there? >> So I've spoken with at least two of the CEOs of the smaller companies that you have on the list. One of the things they're worried about is that if they continue to operate independently without being part of a hyperscaler, either the hyperscalers will develop something to compete against them full scale, or they'll become irrelevant. Because at the end of the day, look, cloud is dominant. Not many companies are going to do AI modeling and training and deployment, the whole nine yards, independently by themselves. They're going to depend on one of the clouds, right? So if customers are already going to be in the cloud, pulling them out to come to you is going to be an extremely difficult problem to solve. So all these companies are going and saying, "You know what? We need to be in the hyperscalers." For example, you could have looked at DataRobot recently, they made announcements with Google and AWS, and they are all over the place. So you need to go where the customers are. Right? >> All right, before we go on, I want to share some other data from ETR on why people adopt AI, and get your feedback. 
So the data historically shows that feature breadth and technical capabilities were the main decision points for AI adoption. That says to me there's too much focus on technology. In your view, is that changing? Does it have to change? Will it change? >> Yes. The simple answer is yes. So here's the thing. The data you're speaking from is from previous years. >> Yes >> I can guarantee you, if you look at the latest data that's coming in now, those two will be secondary and tertiary points. The number one would be about ROI, and how do I achieve it? I've spent a ton of money on all of my experiments. This is the same theme I'm seeing across the board when talking to everybody who's spending money on AI. I've spent so much money on it. When can I get it live in production? How can I quickly get it? Because, you know, the board is breathing down their neck. You already spent this much money. Show me something that's valuable. So the ROI is going to become, take it from me, I'm predicting this for 2023, that's going to become number one. >> Yeah, and if people focus on it, they'll figure it out. Okay. Let's take a look at some of the top players, some of the names we just looked at, and double-click on that and break down their spending profile. So the chart here shows the net score and how net score is calculated. So pay attention to the second set of bars, for Databricks, who was pretty prominent on the previous chart. And we've annotated the colors. The lime green is, we're bringing the platform in new. The forest green is, we're going to spend 6% or more relative to last year. And the gray is flat spending. The pinkish is, our spending's going to be down on AI and ML, 6% or worse. And the red is churn. So you don't want big red. You subtract the reds from the greens and you get net score, which is shown by those blue dots that you see there. So AWS has the highest net score and very little churn. I mean, low single-digit churn. 
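The net score arithmetic described above, subtract the reds from the greens while ignoring flat spend, can be sketched in a few lines. The percentages below are hypothetical, not actual ETR survey figures.

```python
def net_score(new_adoption, spend_up, flat, spend_down, churn):
    """ETR-style net score: greens (new adoption plus spending up 6%+)
    minus reds (spending down 6%+ plus churn); flat spend is ignored."""
    return (new_adoption + spend_up) - (spend_down + churn)

# Hypothetical vendor: 12% adopting new, 40% spending more, 38% flat,
# 6% spending less, 4% churning
print(net_score(12, 40, 38, 6, 4))  # 42
```

A vendor with heavy churn can post a negative net score even while most respondents hold spending flat, which is why the blue dots on the chart can dip below zero.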
But notably, you see Databricks and DataRobot are next in line, with Microsoft and Google also; they've got very low churn. Andy, what are your thoughts on this data? >> So a couple of things stand out to me. Most of them are in line with my conversations with customers. A couple of them stood out to me, like how badly IBM Watson is doing. >> Yeah, bring that back up if you would. Let's take a look at that. IBM Watson is on the far right, and the red, that bright red, is churning, and again, you want low red here. Why do you think that is? >> Well, look, IBM has been at the forefront of innovating things for many, many years now, right? And over the course of years, we talked about this, they moved from a product-innovation-centric company into more of a services company. And over the years, at one point, you know, they were making the majority of their money from services. Now things have changed. Arvind has taken over; he came from research. So he's doing a great job of trying to reinvent them as a company. But it's going to be a long way to catch up. IBM Watson, if you think about it, played what, Jeopardy and chess years ago, like 15 years ago? >> It was jaw-dropping when you first saw it. And then they weren't able to commercialize that. >> Yeah. >> And you're making a good point. When Gerstner took over IBM at the time, John Akers wanted to split the company up. He wanted to have a database company, he wanted to have a storage company, because that's where the industry trend was. Gerstner said no. He came from AMEX, right? He came from American Express. He said, "No, we're going to have a single throat to choke for the customer." They bought PwC for relatively short money. I think it was $15 billion. It completely transformed, and I would argue saved, IBM. But the trade-off was, it sort of took them out of product leadership. And so from Gerstner to Palmisano to Rometty, it was really a services-led company. 
And I think Arvind is really bringing it back to a product company with strong consulting. I mean, that's one of the pillars. And so I think they've got a strong story in data and AI. They just got to sort of bring it together better. Bring that chart up one more time. The other point is Oracle. Oracle sort of has the dominant lock-in for mission-critical database, and they're sort of applying AI there. But to your point, they're really not an AI company in the sense of taking unstructured data and doing new things with it. It's really about how to make Oracle better, right? >> Well, you've got to remember, Oracle is about database for structured data. So in yesterday's world, they were the dominant database. But you know, if you start storing videos and texts and audio and other things, and then start doing vector search and all that, Oracle is not necessarily the database company of choice. And their strongest thing being apps and building AI into the apps? They are kind of surviving in that area. But again, I wouldn't name them as an AI company, right? But the other thing that surprised me in that list you showed me is, yes, AWS is number one. >> Bring that back up if you would, Ken. >> AWS is number one, as it should be. But what actually caught me by surprise is how DataRobot is holding, you know? I mean, look at that. On either net-new adoption and/or expansion, DataRobot seems to be doing equally well, even better than Microsoft and Google. That surprises me. >> DataRobot's, and again, this is a function of spending momentum. So remember from the previous chart that Microsoft and Google are much, much larger than DataRobot. DataRobot is more niche, but it has spending velocity and has always had strong spending velocity, despite some of the recent organizational challenges. And then you see these other specialists: H2O.ai, Anaconda, Dataiku, a little bit of red showing there for C3.ai. 
But these, again, to stress, are the specialists, other than obviously the hyperscalers. These are the specialists in AI. All right, so we hit the bigger names in the sector. Now let's take a look at the emerging technology companies. And one of the gems of the ETR dataset is the Emerging Technology Survey. It's called ETS. They used to just do it twice a year. It's now run four times a year. I just discovered it around mid-2022. And it's exclusively focused on private companies that are potential disruptors. They might be M&A candidates, and if they've raised enough money, they could be acquirers of companies as well. So Databricks would be an example. They've made a number of investments in companies. Snyk would be another good example. Companies that are private, but they're buyers, and they hope to go IPO at some point in time. So this chart here shows the emerging companies in the ML/AI sector of the ETR dataset. The dimensions of this are similar: net sentiment on the Y axis and mindshare on the X axis. Basically, the ETS study measures awareness on the X axis, and intent to do something with, evaluate or implement or not, on the vertical axis. So it's like net score on the vertical, where negatives are subtracted from the positives. And again, mindshare is vendor awareness; that's the horizontal axis. Now, that inserted table shows net sentiment and the Ns in the survey, which inform the position of the dots. And you'll notice we're plotting TensorFlow as well. We know that's not a company, but it's there for reference, as open source tooling is an option for customers, and ETR sometimes likes to show that as a reference point. Now, we've also drawn a line for Databricks to show how relatively dominant they've become in the past 10 ETS surveys in terms of mindshare, going back to late 2018. And you can see a dozen or so other emerging tech vendors. 
So Andy, I want you to share your thoughts on these players. Who are the ones to watch? Name some names. We'll bring that data back up as you comment. >> So Databricks, as you said, remember we talked about how Oracle is not necessarily the database of choice, you know? So Databricks is kind of trying to solve some of those issues for AI/ML workloads, right? And the problem is also, there is no one company that can solve all of the problems. For example, if you look at the names in here, some of them are database names, some of them are platform names, some of them are MLOps companies like DataRobot (indistinct) and others. And some of them are feature-store companies like, you know, Tecton and such. >> So it's a mix of those sub-sectors? >> It's a mix of those companies. >> We'll talk to ETR about that. They'd be interested in your input on how to make this more granular with these sub-sectors. You've got Hugging Face in here. >> Which is NLP, yeah. >> Okay. So your take, are these companies going to get acquired? Are they going to go IPO? Are they going to merge? >> Well, most of them are going to get acquired. My prediction would be most of them will get acquired, because look, at the end of the day, hyperscalers need these capabilities, right? So they're going to either create their own; AWS is very good at doing that. They have done a lot of those things. But the other ones, particularly Azure, they're going to look at it and say, "You know what, it's going to take time for me to build this. Why don't I just go and buy you?" Right? Or even the smaller players like Oracle or IBM Cloud that still exist, they might even take a look at them, right? So at the end of the day, a lot of these companies are going to get acquired or merged with others. >> Yeah. All right, let's wrap with some final thoughts. I'm going to make some comments, Andy, and then ask you to dig in here. 
Look, despite the challenge of leveraging AI, you know, Ken, if you could bring up the next chart. We're not predicting the AI winter of the 1990s repeating. Machine intelligence is a superpower that's going to permeate every aspect of the technology industry. AI and data strategies have to be connected. Leveraging first-party data is going to increase AI competitiveness and shorten time to value. Andy, I'd love your thoughts on that. I know you've got some thoughts on governance and AI ethics. You know, we talked about ChatGPT, deepfakes. Help us unpack all these trends. >> So there's so much information packed in there, right? The AI and data strategy, that's very, very, very important. If you don't have proper data, people don't realize that your AI is the models that you build, and it's predominantly based on the data that you have. AI cannot predict something that's going to happen without knowing what it is. It needs to be trained, it needs to understand what it is you're talking about. So 99% of the time you've got to have good data to train on. So this is where, as I mentioned to you, the problem is a lot of these companies can't afford to collect the real-world data because it takes too long and it's too expensive. So a lot of these companies are trying to go the synthetic data way. It has its own set of issues because you can't use all... >> What's that, synthetic data? Explain that. >> Synthetic data is basically not real-world data, but created or simulated data, equivalent to and based on real data. It looks, feels, smells, tastes like real data, but it's not exactly real data, right? This is particularly useful in the financial and healthcare industries. At the end of the day, if you have real data about your and my medical history, even if you redact it, you can still reverse it. It's fairly easy, right? >> Yeah, yeah. 
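A minimal sketch of the synthetic data idea Andy describes: generate records that match the real data's per-column statistics without reproducing any actual individual's row. Real-world generators (GANs, copula models, and the like) are far more sophisticated; the columns here are hypothetical.

```python
import random
import statistics

def make_synthetic(real_rows, n, seed=0):
    """Sample n synthetic records from per-column normal distributions
    fitted to the real data; no output row is copied from the input."""
    rng = random.Random(seed)
    cols = list(zip(*real_rows))  # transpose rows into columns
    params = [(statistics.mean(c), statistics.pstdev(c)) for c in cols]
    return [tuple(rng.gauss(mu, sigma) for mu, sigma in params)
            for _ in range(n)]

# Hypothetical (age, blood pressure) records standing in for real patients
real = [(34, 120.0), (51, 140.0), (29, 118.0), (60, 155.0)]
fake = make_synthetic(real, 3)
```

Because each synthetic row is drawn from fitted distributions rather than copied, there is no row-level correlation back to a real person, which is the privacy property Andy is pointing at.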
>> So by creating synthetic data, there is no correlation between the real data and the synthetic data. >> So that's part of AI ethics and privacy and, okay. >> So the synthetic data, the issue with that is, when you're trying to commingle it with real data, you can't create models based just on synthetic data, because synthetic data, as I said, is artificial data. So basically you're creating artificial models, so you've got to blend it in properly, and that blend is the problem: how much real data, how much synthetic data you can use. You've got to use judgment between efficiency, cost, and time duration. So that's one-- >> And risk. >> And the risk involved with that. And the secondary issue, which we talked about, is that when you're creating, okay, you take a business use case, okay, you think about investing in things, you build the whole thing out and you're trying to put it out into the market. Most companies that I talk to don't have proper governance in place. They don't have ethics standards in place. They don't worry about the biases in data; they just go on trying to solve a business case. >> It's wild west. >> 'Cause that's how they start. It's a wild west! And then at the end of the day, when they are close to some legal litigation action or something else happens, that's when the "oh shit!" moment happens, right? And then they come in and say, "You know what, how do I fix this?" The governance, security, and all of those things, ethics, data bias, de-biasing, none of them can be an afterthought. It's got to start from the get-go. So you've got to start at the beginning, saying, "You know what, I'm going to do all of those AI programs, but before we get into this, we've got to set some framework for doing all these things properly." Right? And then the-- >> Yeah. So let's go back to the key points. I want to bring up the cloud again, because you've got to get cloud right. 
Getting that right matters in AI, to the points that you were making earlier. You can't just be out on an island. The hyperscalers are going to obviously continue to do well. More and more data is going into the cloud, and they have the native tools. To your point, in the case of AWS, Microsoft's obviously ubiquitous, Google's got great capabilities here. They've got integrated ecosystem partners that are going to continue to strengthen through the decade. What are your thoughts here? >> So a couple of things. One is the last-mile ML or last-mile AI that nobody's talking about. So that needs to be attended to. There are a lot of players in the market coming up. When I talk about last mile, I'm talking about, after you're done with the experimentation of the model, how fast and quickly and efficiently can you get it to production? So that's production being-- >> Compressing that time is going to put dollars in your pocket. >> Exactly. Right. >> So once, >> If you got it right. >> If you get it right, of course. So there are a couple of issues with that. Once you figure out that the model is working, that's perfect. People don't realize, the moment that decision is made, it's like a new car. After you purchase it, the value decreases on a minute-by-minute basis. Same thing with the models. Once the model is created, you need to be in production right away, because it starts losing value by the second and the minute. So issue number one: how fast can I get it over there? So your deployment, your inferencing efficiently at the edge locations, your optimization, your security, all of this is at issue. But you know what is more important than that in the last mile? Keeping the model up; you continue to work on it. Again, going back to the car analogy, at one point you've got to figure out your car is costing more to operate than it's worth. So you've got to get a new car, right? And that's the same thing with the models as well. 
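The model decay point above, pull a model out of production once its quality has drifted too far from what it delivered at deploy time, can be sketched as a simple monitoring check. The metric and the 10% tolerance are placeholder assumptions, not anything prescribed in the conversation.

```python
def needs_retraining(recent_errors, baseline_error, tolerance=0.10):
    """Flag a deployed model once its rolling average error drifts
    more than `tolerance` (10% by default) above the error measured
    at deployment time."""
    rolling = sum(recent_errors) / len(recent_errors)
    return rolling > baseline_error * (1 + tolerance)

# A hypothetical pricing model deployed with 5% error: still healthy...
print(needs_retraining([0.050, 0.051, 0.049], baseline_error=0.05))  # False
# ...but decayed once recent predictions drift well above baseline
print(needs_retraining([0.060, 0.070, 0.065], baseline_error=0.05))  # True
```

In practice this check runs continuously against live inference traffic, and a True result triggers the pull-retrain-redeploy loop described in the discussion.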
If your model has reached a stage where it is actually a potential risk for your operation, to give you an idea: if Uber has a model, the first time you get a car going from point A to B, it costs you $60. If the model decayed, the next time it might give me a $40 rate. I would take it, definitely, but it's a loss for the company. The business risk associated with operating on a bad model, you should realize it immediately, pull the model out, retrain it, redeploy it. That is key. >> And that's got to be huge in security. Model recency, and security to the extent that you can get real time, is big. I mean, you see Palo Alto, CrowdStrike, a lot of other security companies injecting AI. Again, they won't show up in the ETR ML/AI taxonomy per se as a pure play. But ServiceNow is another company that you have mentioned to me offline. AI is just getting embedded everywhere. >> Yep. >> And then I'm glad you brought up real-time inferencing, 'cause a lot of the AI today is modeling done in the cloud. The last point we wanted to make here, and I'd love to get your thoughts on this, is that real-time AI inferencing, for instance at the edge, is going to become increasingly important. It's going to usher in new economics, new types of silicon, particularly Arm-based. We've covered that a lot on "Breaking Analysis": new tooling, new companies, and that could disrupt the sort of cloud model if new economics emerge, 'cause cloud is obviously very centralized, and they're trying to decentralize it. But over the course of this decade we could see some real disruption there. Andy, give us your final thoughts on that. >> Yes and no. I mean, at the end of the day, cloud is kind of centralized now, but a lot of these companies, including AWS, are kind of trying to decentralize it by putting out their own sub-centers and edge locations. >> Local Zones, Outposts. >> Yeah, exactly. 
Particularly the Outposts concept. And even if it becomes like a micro-center and such, it won't go down to the level of a single IoT device, but the cloud extends itself toward that level. So if there is an opportunity or a need for it, the hyperscalers will figure out a way to fit that model. So I wouldn't worry too much about deployment and where to have it and what to do with that. But you know, figure out the right business use case, get the right data, get the ethics and governance in place, make sure you get it to production, and make sure you pull the model out when it's not operating well. >> Excellent advice. Andy, I've got to thank you for coming into the studio today and helping us with this "Breaking Analysis" segment. Outstanding collaboration and insights and input in today's episode. Hope we can do more. >> Thank you. Thanks for having me. I appreciate it. >> You're very welcome. All right. I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE. He does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, all you've got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com to get in touch, or DM me @dvellante, or comment on our LinkedIn posts. Please check out etr.ai for the best survey data in the enterprise tech business. Also Constellation Research; Andy publishes there some awesome information on AI and data. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, everybody, and we'll see you next time on "Breaking Analysis". (gentle closing tune plays)

Published Date: Dec 29, 2022


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Alex MarsonPERSON

0.99+

AndyPERSON

0.99+

Andy ThuraiPERSON

0.99+

Dave VellantePERSON

0.99+

AWSORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Ken SchiffmanPERSON

0.99+

Tom DavenportPERSON

0.99+

AMEXORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Cheryl KnightPERSON

0.99+

Rashmi KumarPERSON

0.99+

Rob HoofPERSON

0.99+

GoogleORGANIZATION

0.99+

UberORGANIZATION

0.99+

KenPERSON

0.99+

OracleORGANIZATION

0.99+

OctoberDATE

0.99+

6%QUANTITY

0.99+

$40QUANTITY

0.99+

January 21DATE

0.99+

ChipotleORGANIZATION

0.99+

$15 billionQUANTITY

0.99+

fiveQUANTITY

0.99+

RashmiPERSON

0.99+

$50,000QUANTITY

0.99+

$60QUANTITY

0.99+

USLOCATION

0.99+

JanuaryDATE

0.99+

AntonioPERSON

0.99+

John AkersPERSON

0.99+

Warren BuffetPERSON

0.99+

late 2018DATE

0.99+

IkeaORGANIZATION

0.99+

American ExpressORGANIZATION

0.99+

MITORGANIZATION

0.99+

PWCORGANIZATION

0.99+

99%QUANTITY

0.99+

HPEORGANIZATION

0.99+

DominoORGANIZATION

0.99+

ArvindPERSON

0.99+

Palo AltoLOCATION

0.99+

30 billionQUANTITY

0.99+

last yearDATE

0.99+

Constellation ResearchORGANIZATION

0.99+

GerstnerPERSON

0.99+

120 billionQUANTITY

0.99+

$100,000QUANTITY

0.99+

Dave Brown, AWS | AWS re:Invent 2021


 

(bright music) >> Welcome back everyone to theCUBE's coverage of AWS re:Invent 2021 in person. So a live event, physical in-person, also virtual hybrid. So a lot of great action online, check out the website. All the videos are there on theCUBE, as well as all of the action on site, and theCUBE's here. I'm John Furrier, your host, with Dave Vellante, my cohost. Finally, we've got David Brown, VP of Elastic Compute Cloud. EC2, the bread and butter, our favorite part of Amazon. David, great to have you back on theCUBE in person. >> John, it's great to be back. It's the first time I've been on theCUBE in person as well. A lot of virtual events with you guys, but it's amazing to be back at re:Invent. >> We're so excited for you. Matt Garman and I have talked in the past, and we've talked in the past too. EC2 is just an amazing product. It's always been the core building block of AWS. More and more action is happening, developers are now getting more access, and we wrote a big piece about it. What's going on? The silicon's really paying off. You've also got general purpose Intel and AMD, and you've got the custom silicon, all working together. What's the new update? Give us a scoop. >> Well, John, it's actually 15 years of EC2 this year, and I've been lucky to be on that team for 14 years, so it's been incredible to see the growth. It's been an amazing journey. The thing that's really driven us is two things. One is supporting new workloads. What are the workloads that customers out there are trying to do on the cloud that we don't support, and can we launch new instance types for them? That's the first thing. The second one is price performance. How do we give customers more performance at a continuously decreasing price, year-over-year? And that's just driven innovation across EC2 over the years, with things like Graviton and all of our Inferentia chips, our custom silicon, but also instance types with the latest Intel Ice Lake CPUs and the latest AMD Milan.
We just announced the AMD Milan instance. It's just constant innovation across the ever-increasing list of instances. So super exciting. >> So instances become the new thing. Provision an instance, spin up an instance. You can get instances in flavors, almost like flavors, right? >> David: Yeah. >> Take us through the difference between an instance and EC2 itself. >> That's correct, yeah. So right now we have over 475 different instances available to you, whether it's GPU accelerators, high-performance computing instances, memory optimized, just an enormous number. We'll actually hit 500 by the end of the year, but that is it. I mean, customers are looking for different types of machines, and those are the instances. >> So the custom silicon, it's one of the most interesting developments. We've written about it. AWS's secret weapon is one of them. I wonder if you could take us back to the decision points and the journey. The Annapurna acquisition, you started working with them as a partner, then you said, all right, let's just buy the company. >> David: Yeah. >> And now you're seeing the acceleration, your time to tapeout is way, way compressed. Maybe what was the catalyst, and maybe we can get into where it's going. >> Yeah, absolutely. Super interesting story, 'cause it actually starts all the way back in 2008. In 2008, EC2 had actually been around for just a little under two years. And if you remember back then, everybody was saying virtualization and hypervisors would never really get you the same performance as what they were calling bare metal back then. Everybody was looking at the cloud. And so we took a look at that. And I mean, network latencies, in some cases with hypervisors, were as high as 200 or 300 milliseconds. And there were a number of real challenges. And so we knew that we would have to change the way that virtualization works and get into hardware.
And so in 2010, 2011, we started to look at how we could offload our network processing, our IO processing, to additional hardware. And that's when we delivered our first Nitro card, in 2012 and 2013. We actually offloaded all of the network processing to a Nitro card, and that Nitro card actually had an Annapurna Arm chip on it, our Nitro 1 chip. >> For the offload? >> The offload card, yeah. And so that's when my team started to code for Arm. We started to work on getting Linux working for Arm. We actually had to write our own operating system initially, 'cause there weren't any operating systems available we could use. And so that's where we started this journey. And over the years, when we saw how well it worked for networking, we said, let's do it for storage as well. And then we said, hey, we could actually improve security significantly. And by 2017, we'd actually offloaded 100% of everything we did on that server to our offload cards, leaving 100% of the server available for customers. And we're still actually the only cloud provider that does that today. >> Just to interject, in the data center today, probably 30% of the general purpose cores are used for offloads. You're saying 0% in the cloud. >> On our Nitro instances, so every instance we've launched since 2017, our C5 onwards, we use 0% of that central core. And you can actually see that in our instance types. If you look at our largest instance type, you can see that we're giving you 96 cores, and we're giving you, in our largest instance, 24 terabytes of memory. We're not giving you 23.6 terabytes 'cause we need some. It's all given to you as the customer. >> So much more efficient. >> Much, much more efficient, much better, better price performance as well. But then ultimately with those Nitro chips, we went through Nitro 1, Nitro 2, Nitro 3, Nitro 4. We said, hey, could we build a general purpose server chip? Could we actually bring Arm into the cloud?
And in 2018, we launched the A1 instance, which was our Graviton1 instance. And what we didn't tell people at the time is that it was actually the same chip we were using on our network card. So essentially, it was a network card that we were giving to you as a server. But what it did is it sparked the ecosystem. That's why we put it out there. And I remember before launch, some was saying, is this just going to be a university project? Are we going to see people from big universities using Arm in the cloud? Was it really going to take off? And the response was amazing. The ecosystem just grew. We had customers move to it and immediately begin to see improvements. And we knew that a year later, Graviton2 was going to come out. And Graviton2 was just an amazing chip. It continues to see incredible adoption, 40% price performance improvement over other instances. >> So this is worth calling out because I think that example of the network card, I mean, innovation can come from anywhere. This is what Jassy always would say is do the experiments. Think about the impact of what's going on here. You're focused on a mission. Let's get that processing of the lowest cost, pick up some workloads. So you're constantly tinkering with tuning the engine. New discovery comes in. Nitro is born. The chip comes in. But I think the fundamental thing, and I want to get your reaction to this 'cause we've put this out there on our post on Sunday. And I said, in every inflection point, I'm old enough, my birthday was yesterday. I'm old enough to know that. >> David: I saw that. >> I'm old enough to know that in the eighties, the client server shifts. Every inflection point where development changed, the methodology, the mindset or platforms change, all the apps went to the better platform. Who wants to run their application on a slower platform? And so, and those inflects. So now that's happening now, I believe. 
So you got better performance and I'm imagining that the app developers are coding for it. Take us through how you see that because okay, you're offering up great performance for workloads. Now it's cloud workloads. That's almost all apps. Can you comment on that? >> Well, it has been really interesting to see. I mean, as I said, we were unsure who was going to use it when we initially launched and the adoption has been amazing. Initially, obviously it's always, a lot of the startups, a lot of the more agile companies that can move a lot faster, typically a little bit smaller. They started experimenting, but the data got out there. That 40% price performance was a reality. And not only for specific workloads, it was broadly successful across a number of workloads. And so we actually just had SAP who obviously is an enormous enterprise, supporting enterprises all over the world, announced that they are going to be moving the S/4 HANA Cloud to run on Graviton2. It's just phenomenal. And we've seen enterprises of that scale and game developers, every single vertical looking to move to Graviton2 and get that 40% price performance. >> Now we have to, as analysts, we have to say, okay, how did you get to that 40%? And you have to make some assumptions obviously. And it feels like you still have some dry powder when you looked at Graviton2. I think you were running, I don't know, it's speculated anyway. I don't know if you guys, it's your data, two and a half, 2.5 gigahertz. >> David: Yeah. >> I don't know if we can share what's going on with Graviton3, but my point is you had some dry powder and now with Graviton3, quite a range of performance, 'cause it really depends on the workload. >> David: That's right. >> Maybe you could give some insight as to that. What can you share about how you tuned Graviton3? 
>> When we look at benchmarking, we don't want to be trying to find that benchmark that's highly tuned and then put out something that is, Hey, this is the absolute best we can get it to and that's 40%. So that 40% is actually just on average. So we just went and ran real world workloads. And we saw some that were 55%. We saw some that were 25. It depends on what it was, but on average, it was around the 35, 45%, and we said 40%. And the great thing about that is customers come back and say, Hey, we saw 40% in this workload. It wasn't that I had to tune it. And so with Graviton3, launching this week. Available in our C7g instance, we said 25%. And that is just a very standard benchmark in what we're seeing. And as we start to see more customer workloads, I think it's going to be incredible to see what that range looks like. Graviton2 for single-threaded applications, it didn't give you that much of a performance. That's what we meant by cloud applications, generally, multi-threaded. In Graviton3, that's no longer the case. So we've had some customers report up to 80% performance improvements of Graviton2 to Graviton3 when the application was more of a single-threaded application. So we started to see. (group chattering) >> You have to keep going, the time to market is compressing. So you have that, go ahead, sorry. >> No, no, I always want to add one thing on the difference between single and multi-threaded applications. A lot of legacy, you're single threaded. So this is kind of an interesting thing. So the mainframe, migration stuff, you start to see that. Is that where that comes in the whole? >> Well, a lot of the legacy apps, but also even some of the new apps, like single threading like video transcoding, for example, is all done on a single core. It's very difficult. I mean, almost impossible to do that multi-threaded way. A lot of the crypto algorithms as well, encryption and cryptography is often single core. 
So with Graviton3, we've seen a significant performance boost for video encoding, cryptographic algorithms, that sort of thing, which really impacts even the most modern applications. >> So that's an interesting point because now single threaded is where the vertical use cases come in. It's not like more general purpose OS kind of things. >> Yeah, and Graviton has already been very broad. I think we're just knocking down the last few verticals where maybe it didn't support it and now it absolutely does. >> And if an ISV then ports, like an SAP's ports to Graviton, then the customer doesn't see any, I mean, they're going to see the performance difference, but they don't have to think about it. >> David: Yeah. >> They just say, I choose that instance and I'm going to get better price performance. >> Exactly, so we've seen that from our ISVs. We've also been doing that with our AWS services. So services like EMR, RDS, Elastic Cache, it will be moving and making Graviton2 available for customers, which means the customer doesn't have to do the migration at all. It's all done for them. They just pick the instance and get the price performance benefits, and so yeah. >> I think, oh, no, that was serverless. Sorry. >> Well, Lambda actually just did launch on Graviton2. And I think they were talking about a 35% price performance improvement. >> Who was that? >> Lambda, a couple of months ago. >> So what does an ISV have to do to port to Graviton. >> It's relatively straightforward, and this is actually one of the things that has slowed customers down is the, wow, that must be a big migration. And that ecosystem that I spoke about is the important part. And today, with all the Linux operating systems being available for Arm running on Graviton2, with all of the container runtimes being available, and then slowly open source applications in ISV is being available. It's actually really, really easy. And we just ran the Graviton2 four-day challenge. 
And we did that because we actually had an enterprise migrate one of the largest production applications in just four days. Now, I probably wouldn't recommend that to most enterprises that we see is a little too fast, but they could actually do that. >> But just from a numbers standpoint, that's insanely amazing. I mean, when you think about four days. >> Yeah. >> And when we talked on virtually last year, this year, I can't remember now. You said, we'll just try it. >> David: That's right. >> And see what happens, so I presume a lot of people have tried it. >> Well, that's my advice. It's the unknown, it's the what will it take? So take a single engineer, tell them and give them a time. Say you have one week, get this running on Graviton2, and I think the results are pretty amazing, very surprised. >> We were one of the first, if not the first to say that Arm is going to be dominant in the enterprise. We know it's dominant in the Edge. And when you look at the performance curves and the time to tape out, it's just astounding. And I don't know if people appreciate that relative to the traditional Moore's law curve. I mean, it's a style. And then when you combine the power of the CPU, the GPU, the NPU, kind of what Apple does in the iPhone, it blows away the historical performance curves. And you're on that curve. >> That's right. >> I wonder if you could sort of explain that. >> So with Graviton, we're optimizing just across every single part of AWS. So one of the nice things is we actually own that end-to-end. So when it starts with the early design of Graviton2 and Graviton3, and we obviously working on other chips right now. We're actually using the cloud to do all of the electronic design automation. So we're able to test with AWS how that Graviton3 chip is going to work long before we've even started taping it out. And so those workloads are running on high-frequency CPU's on Graviton. Actually we're using Graviton to build Graviton now in the cloud. 
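Brown's migration advice above, give one engineer a week and see what happens, usually starts with a simple inventory: what architecture is the code actually running on, and which dependencies still lack arm64 builds? A minimal sketch of that first pass in Python; the dependency manifest below is hypothetical, invented purely for illustration:

```python
import platform

# Normalize the reported machine architecture; "aarch64" (Linux) and
# "arm64" (macOS) both mean an Arm build, e.g. a Graviton instance.
ARM64_ALIASES = {"aarch64", "arm64"}

def current_arch():
    machine = platform.machine().lower()
    return "arm64" if machine in ARM64_ALIASES else machine

# Hypothetical dependency manifest: True means an arm64 build is known
# to exist for that dependency. These entries are invented examples.
deps_arm64_ready = {
    "openssl": True,
    "libjpeg-turbo": True,
    "legacy-codec": False,  # no arm64 build yet: this is the port blocker
}

def port_blockers(manifest):
    """Dependencies that would block a Graviton (arm64) port."""
    return sorted(name for name, ready in manifest.items() if not ready)

print("running on:", current_arch())
print("port blockers:", port_blockers(deps_arm64_ready))
```

A clean result here doesn't prove the app performs well on Graviton; it only tells the engineer where the week is going to be spent.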
The other thing we're doing is we're making sure that the Annapurna team that's building those CPUs is deeply engaged with my team and we're going to ultimately go and build those instances so that when that chip arrives from tapeout. I'm not waiting nine months or two years, like would normally be the case, but I actually had an instance up and running within a week or two on somebody's desk studying to do the integration. And that's something we've optimized significantly to get done. And so it allows us to get that iteration time. It also allows us to be very, very accurate with our tapeouts. We're not having to go back with Graviton. They're all A1 chips. We're not having to go back and do multiple runs of these things because we can do so much validation and performance testing in the cloud ahead of time. >> This is the epiphany of the Arm model. >> It really is. >> It's a standard. When you send it to the fab, they know what's going to work. You hit volume and it's just no fab. >> Well, this is a great thread. We'll stay on this 'cause Adam told us when we met with them for re:Invent that they're seeing a lot more visibility into use cases at the scale. So the scale gives you an advantage on what instances might work. >> And makes the economics works. >> Makes the economics work, hence the timing, the shrinking time to market, not there, but also for the apps. Talk about the scale advantage you guys have. >> Absolutely. I mean, the scale advantage of AWS plays out in a number of ways for our customers. The first thing is being able to deliver highly optimized hardware. So we don't just look at the Graviton3 CPU, you were speaking about the core count and the frequency and Peter spoke about a lot of that in his keynote yesterday. But we look at how does the Graviton3 CPU work with the rest of the instance. What is the right balance between the CPU and memory? The CPU and the Hydro. What's the performance and the drive? 
We just launched the Nitro SSD, which is, we're now actually building our own custom SSDs for Nitro, getting better performance, being able to do updates, better security, making it more cloudy. We've been challenged with the SSDs available in the market. The other place that scale is really helping is in capacity, being able to make sure that we can absorb things like the COVID spike, or the stuff you see in the financial industry with just enormous demand for compute. We can do that because of our scale. We are able to scale. And the final area is actually in quality, because I have such an enormous fleet, I'm actually able to drive down AFR, so annual failure rates, well below what the mathematical, theoretical possibility is. So if you look at what's put on that actual sticker on the box that says you should be able to get a full percent AFR, at scale and with focus, we're actually able to get that down to significantly below what the mathematical entitlement would actually be. >> Yeah, it's incredible. And this is the advantage, and that's why I believe anyone who's writing applications that include a database, data transfer, any kind of execution of code, will use the stack. >> Why would they? Really, why? We've seen this, like you said before, whether it was the PC, then the fastest Pentium or whatever. >> Why would you want your app to run slower? >> The Unix box, right? ISVs want it to run as fast and as cheaply as possible. Now power plays into it as well. >> Yeah, well, I agree with what you're saying. We do have a number of customers that are still looking to run on x86, but obviously customers that want Windows, Windows isn't available for Arm, and so that's a challenge. They'll continue to do that. And you know, the way we do look at it is Moore's law kind of died out on us in 2002, 2003.
And what I'm hoping is, not necessarily bringing Moore's law back, but that we say, let's not accept the 10%, 15% improvement year-over-year. There's absolutely more we can all be doing. And so I'm excited to see where the x86 world's going, and they're doing a lot of great stuff. Intel Ice Lake is looking amazing. Milan is really great to have in AWS as well. >> Well, I think that's a fair point, 'cause we certainly look at what Pat's doing at Intel and he's remaking the company. I've said he's going to follow the Arm playbook in my mind a little bit, which is the right thing to do. So competition is a good thing. >> David: Absolutely. >> We're excited for you, and it's great to see Graviton, and you guys have this kind of inflection point. We've been tracking it for a while, but now the world's starting to see it. So congratulations to your team. >> David: Thank you. >> Just a couple of things. You guys have some news on instances. Talk about the deprecation issue and how you guys are keeping instances alive real quick. >> Yeah, we're super customer obsessed at Amazon, and so that really drives us. And one of the worst things for us to do is to have to tell a customer that we're no longer supporting a service. We recently actually just deprecated the EC2-Classic network. I'm not sure if you saw that, and that's actually after 10 years of continuing to support it. And the only reason we did it is we have a tiny percentage of customers still using that from back in 2012. But one of the challenges is obviously instance hardware eventually will time out and fail and have hardware issues as it gets older and older. And so we didn't want to be in a place, in EC2, where we would have to constantly go to customers and say that m1.small, that C3, whatever you were running, is no longer supported, please move. That's just a tax that customers shouldn't have to pay. And if they're still getting value out of an older instance, let them keep using it.
So we actually just announced at re:Invent, in my keynote on Tuesday, longevity support for EC2 instances, which means we will never come back to you again and ask you to please get off an instance, because we can actually emulate all those instances on our Nitro system. And so all of these instances are starting to migrate to Nitro. You're getting all the benefits of Nitro now for some of our older Xen instances, but also you don't have to worry about that work. That's just not something you need to do to get off an old instance. >> That's great. That's a great service. Stay on as long as you want. When you're ready to move, move. Okay, final question for you. I know we've got time, I want to get this in. The global network. You guys are announcing the AWS Cloud WAN service. Give us the update on what's going on with that. >> So Werner just announced that in his keynote, and over the last two to three years or so, we've seen a lot of customers starting to use the AWS backbone, which is extensive. I mean, you've seen the slides in Werner's keynote. It really does span the world. I think it's probably one of the largest networks out there. Customers are starting to use that for their branch office communication. So instead of going and provisioning their own international MPLS networks and that sort of thing, they say, let me onboard to AWS with VPN or Direct Connect, and I can actually run on the AWS backbone around the world. Now doing that actually has some complexity. You've got to think about transit gateways. You've got to think about inter-region peering. And AWS Cloud WAN takes all of that complexity away. You essentially create a cloud WAN, connect to it with VPN or Direct Connect, and you can even go and actually set up network segments, so essentially VLANs for different parts of the organization. So super excited to get that out there. >> So the ease of use is the key there. >> Massively easy to use, and we have 26 SD-WAN partners.
We're even partnering with folks like Verizon, and Swisscom in Switzerland, the telco, to actually allow them to use it for their customers as well. >> We'll probably use your service someday when we have a global rollout date. >> Let's do that, CUBE Global. And then the other one was the M1 EC2 instance, which got a lot of applause. >> David: Absolutely. >> M1, I think it was based on A15. >> Yeah, that's for Mac. We've got to be careful, 'cause M1 is our first instance type as well. >> Yeah right, there's a little confusion there. >> So it's a Mac. The EC2 Mac is with M1 silicon from Apple, which we're super excited to put out there. >> Awesome. >> David Brown, great to see you in person. Congratulations to you and the team and all the work you guys have done over the years. And now that people are starting to realize the cloud platform, the compute just gets better and better. It's a key part of the system. >> Thanks John, it's great to be here. >> Thanks for sharing. >> SiliconANGLE is here. We're talking about custom silicon here at AWS. I'm John Furrier with Dave Vellante. You're watching theCUBE, the global leader in tech coverage. We'll be right back with more coverage from re:Invent after this break. (bright music)

Published Date : Dec 2 2021

Breaking Analysis - How AWS is Revolutionizing Systems Architecture


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. AWS is pointing the way to a revolution in system architecture. Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems design. The secret sauce underpinning these innovations is specialized designs that break the stranglehold of inefficient and bloated centralized processing, and allow AWS to accommodate a diversity of workloads that span cloud, data center, as well as the near and far edge. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis" we'll dig into the moves that AWS has been making, which we believe define the future of computing. We'll also project what this means for customers, partners, and AWS's many competitors. Now let's take a look at AWS's architectural journey. The IaaS revolution started by giving easy access, as we all know, to virtual machines that could be deployed and decommissioned on demand. Amazon at the time used a highly customized version of Xen that allowed multiple VMs to run on one physical machine. The hypervisor functions were controlled by x86. Now according to Werner Vogels, as much as 30% of the processing was wasted, meaning it was supporting hypervisor functions and managing other parts of the system, including the storage and networking. These overheads led to AWS developing custom ASICs that help to accelerate workloads. Now in 2013, AWS began shipping custom chips and partnered with AMD to announce EC2 C3 instances. But as the AWS cloud started to scale, they really weren't satisfied with the performance gains that they were getting, and they were hitting architectural barriers. That prompted AWS to start a partnership with Annapurna Labs. This was back in 2014, and they then launched EC2 C4 instances in 2015.
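The 30% overhead figure attributed to Vogels translates directly into sellable capacity. A quick back-of-the-envelope sketch; the fleet size and core count below are invented for illustration, and only the 30% overhead figure comes from the segment:

```python
# Cycles spent on hypervisor, storage, and networking overhead are cycles
# you cannot sell to customers. Offloading them to a dedicated card (the
# Nitro approach) reclaims that capacity. Fleet numbers are illustrative.
def sellable_cores(hosts, cores_per_host, overhead_fraction):
    """Cores left for customer workloads after platform overhead."""
    return hosts * cores_per_host * (1 - overhead_fraction)

before = sellable_cores(1000, 96, 0.30)  # x86-controlled hypervisor era
after = sellable_cores(1000, 96, 0.00)   # fully offloaded, Nitro-style

print(f"sellable cores before offload: {before:,.0f}")
print(f"sellable cores after offload:  {after:,.0f}")
print(f"capacity uplift from offload:  {after / before - 1:.1%}")
```

Note that the uplift comes out above 30%, because the reclaimed cycles are measured against the smaller pre-offload base.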
The ASIC in C4 optimized offload functions for storage and networking, but still relied on Intel Xeon as the control point. AWS shelled out a reported $350 million to acquire Annapurna in 2015, which is a meager sum to acquire the secret sauce of its future system design. This acquisition led to a modern version of Project Nitro in 2017. Nitro offload cards were first introduced in 2013. At this time, AWS introduced C5 instances, replaced Xen with KVM, and more tightly coupled the hypervisor with the ASIC. Vogels shared last year that this milestone offloaded the remaining components, including the control plane and the rest of the I/O, and enabled nearly a hundred percent of the processing to support customer workloads. It also enabled a bare metal version of the compute that spawned the famous partnership with VMware to launch VMware Cloud on AWS. Then in 2018, AWS took the next step and introduced Graviton, its custom designed Arm-based chip. This broke the dependency on x86 and launched a new era of architecture, which now supports a wide variety of configurations to support data intensive workloads. Now these moves preceded other AWS innovations, including new chips optimized for machine learning, training and inferencing, and all kinds of AI. The bottom line is AWS has architected an approach that offloaded the work currently done by the central processing unit in most general purpose workloads, like in the data center. It has set the stage, in our view, for the future, allowing shared memory, memory disaggregation, and independent resources that can be configured to support workloads from the cloud all the way to the edge. And Nitro is the key to this architecture. To summarize AWS Nitro: think of it as a set of custom hardware and software that runs on an Arm-based platform from Annapurna. AWS has moved the hypervisor, the network, and the storage virtualization to dedicated hardware that frees up the CPU to run more efficiently. This, in our opinion, is
where the entire industry is headed.

So let's take a look at that. This chart pulls data from the ETR data set and lays out the key players competing for the future of cloud, data center, and the edge. We've superimposed NVIDIA up top along with Intel; they don't show up directly in the ETR survey, but they clearly are platform players in the mix. We covered NVIDIA extensively in previous Breaking Analysis episodes and won't go too deep there today. The data shows Net Scores on the vertical axis, a measure of spending velocity, and market share on the horizontal axis, a measure of pervasiveness within the ETR data set. We're not going to dwell on the relative positions here; rather, let's comment on the players, starting with AWS. We've laid out how AWS got here, and we believe they are setting the direction for the future of the industry. AWS is really pushing migration to its Arm-based platforms. Patrick Moorhead, at the Six Five Summit, spoke to Dave Brown, who heads EC2 at AWS, and he talked extensively about migrating from x86 to AWS's Arm-based Graviton2.
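The economics behind this migration push boil down to simple fleet arithmetic, doing the same work on fewer instances. A hedged sketch of that math, where the instance counts match the example discussed in this episode and the hourly rate is a hypothetical placeholder, not an AWS list price:

```python
# Fleet-cost sketch of the Graviton migration pitch: the work of 100 x86
# instances done on 60 Arm-based instances. The $0.17 hourly rate is a
# hypothetical placeholder chosen for illustration only.

def monthly_cost(instances: int, hourly_rate: float, hours: int = 730) -> float:
    """Approximate monthly fleet cost (730 hours ~ one month)."""
    return instances * hourly_rate * hours

x86 = monthly_cost(100, 0.17)      # baseline fleet
graviton = monthly_cost(60, 0.17)  # same work, fewer instances

saving = 1 - graviton / x86
print(f"Monthly saving: {saving:.0%}")
```

At equal per-instance pricing, cutting the fleet from 100 to 60 instances is a 40% cost reduction; in practice the split between fewer instances and lower per-instance price varies by workload.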
And he announced a new developer challenge to accelerate that migration to Graviton instances. The end game for customers is 40% better price-performance, so a customer running 100 server instances can do the same work with 60 servers. There's some work involved for customers to actually get there, but the payoff, if they can get a 40% improvement in price-performance, is quite large. Imagine this: AWS currently offers 400 different EC2 instances. As we reported earlier this year, nearly 50% of the new EC2 instances shipped in 2020 were Arm-based, and AWS is working hard to accelerate this pace. It's very clear.

Now let's talk about Intel. I'll just say it: Intel is finally responding in earnest, and basically it's taking a page out of Arm's playbook. We're going to dig into that a bit today. In 2015, Intel paid $16.7 billion for Altera, a maker of FPGAs. Also at the Six Five Summit, Navin Shenoy of Intel presented details of what Intel is calling an IPU, an infrastructure processing unit. This is a departure from Intel norms, where everything is controlled by a central processing unit. IPUs are essentially SmartNICs, as are DPUs, so don't get caught up in the acronym soup. As we've reported, it's all about offloading work, disaggregating memory, and evolving SoCs (systems on chip) and SoPs (systems on package). But just let this sink in for a moment: Intel's moves this past week, it seems to us anyway, are designed to create a platform that is Nitro-like, and the basis of that platform is a $16.7 billion acquisition. Just compare that to AWS's $350 million tuck-in of Annapurna. That is incredible. Shenoy said in his presentation, rough quote: "We've already deployed IPUs using FPGAs in very high volume at Microsoft Azure, and we've recently announced partnerships with Baidu, JD Cloud, and VMware."

So let's look at VMware. VMware is the
other really big platform player in this race. In 2020, VMware announced Project Monterey, you might recall, and it's based on the aforementioned FPGAs from Intel. So VMware is in the mix, and it chose to work with Intel, most likely for a variety of reasons. One of the obvious ones is that all the software running on VMware has been built for x86, and there's a huge install base there. The other is that Pat Gelsinger was heading VMware at the time Project Monterey was conceived, so I'll let you connect the dots if you like. Regardless, VMware has a Nitro-like offering, in our view. Its optionality, however, is limited by Intel, but at least it's in the game, and it appears to be ahead of the competition in this space, AWS notwithstanding, because AWS is clearly in the lead.

What about Microsoft and Google? Suffice it to say that we strongly believe, despite Intel's comments about shipping FPGAs in volume to Microsoft, that both Microsoft and Google, as well as Alibaba, will follow AWS's lead and develop an Arm-based platform like Nitro. We think they have to in order to keep pace with AWS.

What about the rest of the data center pack? Well, Dell has VMware, so despite the split we don't expect any real changes there; Dell is going to leverage whatever VMware does and do it better than anyone else. Cisco is interesting in that it just revamped its UCS, but we don't see any evidence of Nitro-like plans in its roadmap. Same with HPE. Both of these companies have history and capabilities around silicon: Cisco designs its own chips today for carrier-class use cases, and HPE, as we've reported, probably has some remnants of The Machine hanging around. But both companies are very likely, in our view, to follow VMware's lead and go with an Intel-based design.

What about IBM? Well, we really don't know. We think the best thing IBM could do would be to move the IBM Cloud to an Arm-based, Nitro-like platform. We think even the mainframe should
move to Arm as well; it's just too expensive to build a specialized mainframe CPU these days.

Now Oracle, they're interesting. If we were running Oracle, we would build an Arm-based, Nitro-like database cloud where Oracle, the database, runs cheaper, faster, and consumes less energy than any other platform that would dare to run Oracle. And we'd go one step further: we would optimize for competitive databases in the Oracle cloud, so we would make OCI run the table on all databases and be, essentially, the database cloud.

Back to FPGAs: we're not overly excited about the market. AMD is acquiring Xilinx for $35 billion, so I guess that's something to get excited about, and at least AMD is using its inflated stock price to do the deal. But honestly, we think the Arm ecosystem will obliterate the FPGA market by making it simpler and faster to move to SoCs, with far better performance, flexibility, integration, and mobility. So again, we're not too sanguine about Intel's acquisition of Altera or the moves AMD is making over the long term.

Now let's take a deeper look at Intel's vision of the data center of the future. Here's a chart that Intel showed depicting it. What you see is the IPUs, which are intelligent NICs, embedded in the four blocks shown and communicating across a fabric. You have general-purpose compute in the upper left, machine intelligence apps on the bottom left, storage services in the top right, and a variation of alternative processors in the bottom right. This is Intel's view of how to share resources and go from a world where everything is controlled by a central processing unit to a more independent set of resources that can work in parallel. Gelsinger has talked about all the cool tech this will allow Intel to incorporate, including PCIe Gen 5 and CXL memory,
which are interfaces that enable memory sharing and disaggregation, plus 5G and 6G connectivity, and so forth. So that's Intel's view of the future of the data center.

Let's look at Arm's vision of the future and compare them. There are definite similarities, as you can see, especially on the right-hand side of this chart. You've got the blocks of different processor types, which of course are programmable, and you'll notice the high-bandwidth memory, the HBM3, plus the DDR5 on the two sides, bookending the blocks; that memory is shared across the entire system, connected by PCIe Gen 5, CXL, or CCIX multi-die socket. So you may be looking at this and saying, okay, two sets of block diagrams, big deal. Well, while there are similarities around disaggregation and, I guess, implied shared memory in the Intel diagram, and of course the use of advanced standards, there are also some notable differences. In particular, Arm is already at the SoC level, whereas Intel is talking about FPGAs. Neoverse, Arm's architecture, is shipping in test mode and will have in-market product by year-end 2022; Intel is talking about maybe 2024, which we think is aspirational, or 2025 at best. Arm's roadmap is much clearer. Intel said it will release more details in October, so we'll pay attention then, and maybe we'll recalibrate at that point, but it's clear to us that Arm is way further along.

The other major difference is volume. Intel is coming at this from a high-end data center perspective and presumably plans to push down-market or out to the edge. Arm is coming at this from the edge: low cost, low power, superior price-performance. Arm is winning at the edge and, based on the data we shared earlier from AWS, is clearly gaining ground in the enterprise. History strongly suggests that the volume approach will win, not only at the low end but eventually at the high end.

So we want to wrap by looking at what this means for customers and the partner ecosystem. The first point we'd like to make is
follow the consumer apps. The capabilities we see in consumer apps, like image processing, natural language processing, facial recognition, and voice translation, these inference capabilities running in mobile today, will find their way into the enterprise ecosystem. Ninety percent of the cost associated with machine learning in the cloud is around inference. In the future, most AI in the enterprise, and most certainly at the edge, will be inference. It isn't today, because it's too expensive. This is why AWS is building custom chips for inferencing: to drive costs down so it can increase adoption.

The second point is that we think customers should start experimenting to see what they can do with Arm-based platforms. Moore's Law is accelerating, at least the outcome of Moore's Law; the doubling of performance every 18 to 24 months is actually much higher than that now, when you add up all the different components in these alternative processors. Just take a look at Apple's A15 chip. Arm is in the lead in terms of performance, price-performance, cost, and energy consumption. By moving some workloads onto Graviton, for example, you'll see what types of cost savings you can drive for which applications, and possibly generate new applications that you can deliver to your business. Put a couple of engineers on the task and see what they can do in two or three weeks. You might be surprised, or you might say, hey, it's too early for us, but you'll find out, and you may strike gold.

We would suggest that you talk to your hybrid cloud provider as well and find out if they have a Nitro. We shared that VMware has a clear path, as does Dell, because they're VMware cousins. What about your other strategic suppliers? What's their roadmap? What's the time frame to move from where they are today to something that resembles Nitro? Do they even think about that? How do they think about it? Do they think it's important to get there? If so, or if
not, how are they thinking about reducing your costs and supporting your new workloads at scale?

Now, for ISVs: these consumer capabilities we discussed earlier, all these mobile and automated systems, cars, and things like that, biometrics being another example, are going to find their way into your software. Your competitors are porting to Arm and embedding these consumer-like capabilities into their apps. Are you? We would strongly recommend that you take a look at that. Talk to your cloud suppliers and see what they can do to help you innovate, run faster, and cut costs.

Okay, that's it for now. Thanks to my collaborator David Floyer, who's been on this topic since early last decade. Thanks to the community for your comments and insights, and thanks to Patrick Moorhead and Daniel Newman for some timely interviews from your event; nice job, fellas. Remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts; just search for Breaking Analysis podcasts. You can always connect with me on Twitter @dvellante, or email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn and Clubhouse, so follow us; if you see us in a room, jump in and let's riff on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights, powered by ETR. Be well, and we'll see you next time.

Published Date : Jun 18 2021

Breaking Analysis: Debunking the Cloud Repatriation Myth


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

Cloud repatriation is a term often used by technology companies, the ones that don't operate a public cloud. The marketing narrative most typically implies that customers have moved work to the public cloud and, for a variety of reasons (expense, performance, security, etc.), are disillusioned with the cloud and as a result are repatriating workloads back to their safe, comfy, and cost-effective on-premises data centers. While we have no doubt this does sometimes happen, the data suggests that it is a single-digit, de minimis phenomenon.

Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. Some have written about the repatriation myth, but in this Breaking Analysis we'll share hard data that we feel debunks the narrative currently being promoted by some. We'll also take this opportunity to do our quarterly cloud revenue update and share our latest figures for the big four cloud vendors.

Let's start by acknowledging that the definition of cloud is absolutely evolving, and in this sense much of the vendor marketing is valid. No longer is cloud just a distant set of remote services that lives up there in the cloud. The cloud is increasingly becoming a ubiquitous sensing, thinking, acting set of resources that touches nearly every aspect of our lives. The cloud is coming on-prem, work is being done to connect clouds to each other, and the cloud is extending to the near and far edge. There's little question about that. Today's cloud is not just compute, storage, connectivity, and spare capacity, but increasingly a variety of services to analyze data and predict or anticipate changes, monitor and interpret streams of information, and apply machine intelligence to data to optimize business outcomes. It's tooling to share data, protect data, visualize data, and bring data to life, supporting a whole new set of innovative
applications. Notice there's a theme there: data. Increasingly, the cloud is where the high-value data lives, from a variety of sources, and it's where organizations go to mine it, because the cloud vendors have the best platforms for data. This is part of why the repatriation narrative is somewhat dubious, actually a lot dubious, because the volume of data in the cloud is growing at rates much faster than data on-prem, at least by a couple thousand basis points annually, by our estimates. So cloud data is where the action is. We'll talk about the edge in a moment, but a new era of application development is emerging, with containers at the center. The concept of write once, run anywhere allows developers to take advantage of systems that run on-prem, say a transaction system, tap data from multiple sources in various locations, maybe multiple clouds or the edge or wherever, and combine that with the immense, cheap processing power we've discussed extensively in previous Breaking Analysis episodes. You see this new breed of AI-powered apps emerging and hitting the market. So this is not a zero-sum game. The cloud vendors have given the world an infrastructure gift by spending like crazy on capex, more than a hundred billion dollars last year for the big four, and in our view the players that don't own a cloud should stop being so defensive about it. They should thank the hyperscalers and lay out a vision for how they'll create a new abstraction layer on top of the public cloud. That's what they're doing, and they'll certainly claim to be actively working on this vision. But consider the pace of play between the hyperscalers and the traditional on-prem providers: we believe the innovation gap is actually widening, meaning the public cloud players are accelerating their innovation lead, and they will 100% compete for hybrid applications. They have the resources and the developer affinity, they're doing custom silicon, and they have the
expertise there, and the TAM expansion goals loom large. So while it's not a zero-sum game, and hybrid is definitely real, we think the cloud vendors continue to gain share most rapidly unless the hybrid crowd can move faster. Of course, there's the edge, and that is a wild card, but it seems that, again, the cloud players are very well positioned to innovate with custom silicon, programmable infrastructure, capex build-outs at the edge, and new thinking around system architectures.

But let's get back to the core story here and take a look at cloud adoption. You hear many marketing messages that call the public cloud into question. At its recent Think conference, IBM CEO Arvind Krishna said that only about 25% of workloads had moved into the public cloud, and he framed it as "this might surprise you," implying you might think it should be much higher. Well, we're not surprised by that figure, especially if you narrow it to mission-critical work, which IBM does in its annual report. Actually, we think that's probably high for mission-critical work moving to the cloud; we think it's a lot lower than that. But regardless, there are other ways to measure cloud adoption, and this chart, from David Moschella's book Seeing Digital, shows the adoption rates for major technological innovations over the past century and the number of years each took to reach 50% household adoption. Electricity took a long time, as did telephones, with that last-mile infrastructure build-out. Radios and TVs were much faster, given the lower infrastructure requirements. PCs actually took a long time, and the web took around nine years from when the Mosaic browser was introduced. We took a stab at estimating the pace of adoption of public cloud, and within a decade it reached 50% adoption in top enterprises; today that figure is easily north of 90%. So, as we said at the top, cloud adoption is actually quite strong, and that adoption is driving
massive growth for the public cloud.

Now, we've updated our quarterly cloud figures and want to share them with you. Here are our latest estimates for the big four cloud players, with only Alibaba left to report. Remember, only AWS and Alibaba report clean, or relatively clean, IaaS figures, so we use survey data and financial analysis to estimate the actual numbers for Microsoft and Google; it's a subset of what they report. For Q1 2021, we estimate that big-four IaaS and PaaS revenue approached $27 billion, a figure that represents about 40% growth relative to Q1 2020. Our trailing-twelve-month calculation puts us at $94 billion, so we're now on roughly a $108 billion run rate, and as you may recall, we've predicted that figure will surpass $115 billion by year end. When it's all said and done, AWS remains the leader among the big four, with just over half of the market; that's down from around 63% for the full year of 2018. Unquestionably, as we've reported, Microsoft is everywhere; they're ubiquitous in the market, and they continue to perform very well. Anecdotally, customers and partners in our community continue to report to us that the quality of the AWS cloud is noticeably better in terms of reliability, overall security, and so on, but it doesn't seem to change the trajectory of the share movements, as Microsoft's software dominance makes doing business with Azure really easy. As of this recording, Alibaba has yet to report, but we'll update these figures once their earnings are released.

Let's dig into the growth rates associated with these revenue figures and make some specific comments. This chart shows the growth trajectory for each of the big four. Google trails the pack in revenue but is growing faster than the others, from, of course, a smaller base. Google is being very aggressive on pricing and customer acquisition, and to that we say: good. Google needs to grow faster, in our view, and they most certainly can afford to be aggressive. As we said, combined, the
big four are growing revenue at 40% on a trailing-twelve-month basis, and that compares with low single-digit growth for on-prem infrastructure. We just don't see this picture changing in the near-to-mid term. As with storage growth, revenue from the big public cloud players is expected to outpace spending on traditional on-prem platforms by at least 2,000 basis points for the foreseeable future. Interestingly, while AWS is growing more slowly than the others, from a much larger $54 billion run rate, we actually saw sequential quarterly growth from AWS in Q1, which breaks a two-year trend in which AWS's Q1 growth rate dropped sequentially from Q4. Interesting.

Of course, at AWS we're watching the changing of the guard: Andy Jassy becoming CEO of Amazon, Adam Selipsky boomeranging back to AWS from a very successful stint at Tableau, and Max Peterson taking over AWS public sector, replacing Teresa Carlson, who is now president and heading up go-to-market at Splunk. Lots of changes, and we think this is actually a real positive for AWS: it promotes from within, it taps previous Amazon DNA from Tableau and Salesforce, and it promotes the head of AWS to run all of Amazon, a signal to us that Amazon will dig in its heels and further resist calls to split AWS from the mothership.

So let's dig a little more into this repatriation-mythbuster theme. The revenue numbers don't tell the entire story, so it's worth drilling down a bit. Let's look at the demand side of the equation and pull in some ETR survey data. To set this up, we want to explain the fundamental method ETR uses for its Net Score metric. Net Score measures spending momentum across five factors, as shown in this wheel chart of the breakdown of spending for the AWS cloud. It shows the percentage of customers on the platform that are: one, adopting the platform new, the lime green in this wheel chart; two, increasing spending by more than five percent, that's the
forest green; three, keeping spending flat, between plus and minus five percent, that's the gray; four, decreasing spend by six percent or more, that's the pink; and finally five, replacing the platform, that's the bright red. Now, dare I say that the bright red is a proxy for, or at least an indicator of, repatriation? Sure, why not, let's say that. Net Score is derived by subtracting the reds from the greens, and anything above 40% we consider elevated. AWS is at 57%, so very high; not much sign of leaving the cloud nest there. We know it's nuanced, and you can make an argument for corner cases of repatriation, but come on, the numbers just don't bear out that narrative.

Let's compare AWS with some of the other vendors to test this theory a bit more. This chart lines up Net Score granularity for AWS, Microsoft, and Google and compares that to IBM and Oracle. Other than AWS and Google, these figures include the entire portfolio for each company, but humor me and let's assume that cloud defections are lower than the overall portfolio average, because cloud has more momentum and is getting more spend. Just stare at the red bars for a moment. The three cloud players show one, two, and three percent replacement rates, respectively, but IBM and Oracle, while still in the single digits, which is good, show noticeably higher replacement rates and meaningfully lower new adoptions in the lime green as well. The spend-more category, in the forest green, is much higher within the cloud companies, and the spend-less, in the pink, is notably lower. You can see the sample sizes on the right-hand side of the chart; we're talking about many hundreds of responses, over 1,300 in the case of Microsoft. If we put HPE or Dell in the chart, with several hundred responses each, they would look similar to IBM and Oracle: higher reds, a bigger, fatter middle of gray, and lower greens. It's just the way it is; it shouldn't surprise anyone; these are
respectable numbers, but it's just what happens with mature companies. So if customers are repatriating, there's little evidence here. We believe what's really happening is that vendor marketing people are talking to customers who are purposefully spinning up test and dev work in the cloud with the intent of running a workload, or portions of that workload, on-prem, and when those workloads move into production, they're counting that as repatriation and taking liberties with the data to flood the market. Okay, well, that's fair game, and all's fair in tech marketing, but that's not repatriation. That's experimentation, or sandboxing, or test and dev. It's not "I'm leaving the cloud because it's too expensive, or less secure, or doesn't perform for me." We're not saying those things don't happen, but they're certainly not visible in the numbers as a meaningful trend that should factor into buying decisions.

Now, we perfectly recognize that organizations can't just refactor their entire application portfolios into the cloud and migrate, and we also recognize that lift-and-shift without a change in operating model is not the best strategy. Real migrations take a long time, six months to two years. I used to have these conversations all the time with my colleague Stu Miniman, and I spoke to him recently about these trends; I wanted to see if six months at Red Hat and IBM had changed his thinking on all this, and the answer was a clear no. But he did throw a little Red Hat Kool-Aid at me, saying that the way they think about the cloud blueprint is from a developer perspective: start by containerizing apps, and then the devs don't need to think about where the apps live, whether in the cloud, on-prem, or at the edge. The Red Hat story is that it brings consistency of operations for developers, operators, admins, the security team, and so on, on any platform, without locking into one platform and bringing it everywhere with me; I
can work with anyone's platform. So that's a very strong story, and it's how Arvind Krishna plans to win what he calls the architectural battle for hybrid cloud.

Okay, let's take a look at how the big cloud vendors stack up against the not-so-big cloud platforms, and all those in between. This chart shows one of our favorite views, plotting Net Score, or spending velocity, on the vertical axis and market share, or pervasiveness in the data set, on the horizontal axis. The red shaded area is what we call the hybrid zone, and within the dotted red lines, that's where the elite live: anything above 40% Net Score on the vertical axis we consider elevated, and anything to the right of 20% on the horizontal axis implies a strong market presence. By those KPIs, it's really a two-horse race between AWS and Microsoft. As we suggested, Google still has a lot of work to do, and if they're out buying market share, that's a start. You see Alibaba shown in the upper left-hand corner, with high spending momentum but from a small sample, as ETR's China respondent level is obviously much lower than in the U.S., Europe, and the rest of APAC.

That shaded red zone is interesting, and it gives credence to the other big narrative out there, from the non-cloud-owning vendors: that the world is hybrid. And it's true; over the past several quarters we've seen this hybrid zone performing well. Prominent examples include VMware Cloud on AWS; VMware Cloud, which would include VCF, VMware Cloud Foundation; Dell's cloud, which is heavily based on VMware; and Red Hat OpenShift, perhaps the most interesting, given its ubiquity, as we were discussing before. You can see it's very highly elevated on the Net Score axis, right there with all the public cloud guys. Red Hat is essentially the Switzerland of cloud, which in our view puts it in a very strong position. Then there's a pack of hybrid companies hovering around the 20% level. By the way, you see
OpenStack there; that's from a large telco presence in the data set. At any rate, you see HPE, Oracle, and IBM. IBM's position in the cloud just tells you how important Red Hat is to IBM; without that acquisition, IBM would be far less interesting in this picture. Oracle is Oracle, and actually has one of the strongest hybrid stories in the industry within its own little, or not so little, world of the red stack. HPE is also interesting, and we'll see how the big GreenLake as-a-service pricing push will impact its momentum in the cloud category. Remember, the definition of cloud here is whatever the customer says it is: if a CIO says we're buying cloud from HPE or IBM or Cisco or Dell or whomever, we take her or his word for it. That's how it works; cloud is in the eye of the buyer.

So you have the cloud expanding into the domain of on-premises, and the on-prem guys finally getting their proverbial acts together with the hybrid they've been talking about since 2009; it looks like it's finally becoming real. And look, it's true, you're not going to migrate everything into the cloud, but the cloud folks are in a very strong position. They are on the growth flywheel, as we've shown; they each have adjacent businesses that are data-based, disruptive, and dominant, whether in retail or search or a huge software estate; they are winning the data wars as well, which seems pretty clear to us; and they have a leg up in AI.

And I want to look at that. Can we all agree that AI is important? I think we can. Machine intelligence is being infused into every application, and today much of the AI work done in the cloud is modeling. In the future we see AI moving to the edge, with real-time inferencing as a dominant workload, but today, again, 90% of it is building models and analyzing data, and a lot of that work happens in the cloud. So who has the momentum in AI? Let's take a look. Here's that same XY graph with Net Score against market share, and look who has the
dominant mind share and position and spending momentum microsoft aws and google you can see in the table insert in the lower right hand side they're the only three in the data set of 1 500 responses that have more than 100 n aws and microsoft have around 200 or even more in the case of microsoft and their net scores are all elevated above the 60 percent level remember that 40 percent that red line indicates the elevation mark the high elevation mark so the hyperscalers have both the market presence and the spend momentum so we think the rich get richer now they're not alone there are several companies above the 40 line databricks is bringing ai and data science to the world of data lakes with its managed services and it's executing very well salesforce is infusing infusing ai into its platform via einstein you got sap on there anaconda is kind of the gold standard that platform for data science and you can see c3 dot ai is tom siebel's company going after enterprise ai and data robot which like c3 ai is a small sample in the data set but they're highly elevated and they're simplifying machine learning now there's ibm watson it's actually doing okay i mean sure we'd like to see it higher given that ginny rometty essentially bet ibm's future on watson but it has a decent presence in the market and a respectable net score and ibm owns a cloud so okay at least it's a player not the dominance that many had hoped for when watson beat ken jennings in jeopardy back 10 years ago but it's okay and then is oracle they're now getting into the act like it always does they want they watched they waited they invested they spent money on r d and then boom they dove into the market and made a lot of noise and acted like they invented the concept oracle is infusing ai into its database with autonomous database and autonomous data warehouse and look that's what oracle does it takes best of breed industry concepts and technologies to make its products better you got to give oracle 
credit it invests in real tech and it runs the most mission critical apps in the world you can hate them if you want but they smoke everybody in that game all right let's take a look at another view of the cloud players and see how they stack up and where the big spenders live in the all-important fortune 500 this chart shows net score over time within the fortune 500 aws is particularly interesting because its net score overall is in the high 50s but in this large big spender category aws net score jumps noticeably to nearly 70 percent so there's a strong indication that aws the largest player also has momentum not just with small companies and startups but where it really counts from a revenue perspective in the largest companies so we think that's a very positive sign for aws all right let's wrap the realities of cloud repatriation are clear corner cases exist but it's not a trend to take to the bank although many public cloud users may think about repatriation most will not act on it those that do are the exception not the rule and the etr data shows that test and dev in the clouds is part of the cloud operating model even if the app will ultimately live on prem that's not repatriation that's just smart development practice and not every workload is will or should live in the cloud hybrid is real we agree and the big cloud players know it and they're positioning to bring their stacks on prem and to the edge and despite the risk of a lock-in and higher potential monthly bills and concerns over control the hyperscalers are well com positioned to compete in hybrid to win hybrid the legacy vendors must embrace the cloud and build on top of those giants and add value where the clouds aren't going to or can't or won't they got to find places where they can move faster than the hyperscalers and so far they haven't shown a clear propensity to do that hey that's how we see it what do you think okay well remember these episodes are all available as podcasts wherever you 
listen you do a search breaking analysis podcast and please subscribe to the series check out etr's website at dot plus we also publish a full report every week on wikibon.com and siliconangle.com a lot of ways to get in touch you can email me at david.velante at siliconangle.com or dm me at dvalante on twitter comment on our linkedin post i always appreciate that this is dave vellante for the cube insights powered by etr have a great week everybody stay safe be well and we'll see you next time you
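The net score metric that anchors this whole analysis is, roughly, the share of customer accounts increasing spend minus the share decreasing it. A minimal sketch of how such a metric can be computed; the response categories and equal weighting here are simplifying assumptions, not ETR's exact survey methodology:

```python
def net_score(responses):
    """Share of respondents increasing spend minus share decreasing it.

    `responses` holds one answer per customer account: "adopt",
    "increase", "flat", "decrease", or "replace". Treating adoption and
    increase as positive, and decrease and replacement as negative, is a
    simplification of ETR's actual weighting.
    """
    positive = sum(r in ("adopt", "increase") for r in responses)
    negative = sum(r in ("decrease", "replace") for r in responses)
    return 100.0 * (positive - negative) / len(responses)

# A vendor where 6 of 10 accounts are increasing spend and 1 is cutting
# lands at 50, above the 40% "elevated" line discussed in the episode.
sample = ["increase"] * 6 + ["flat"] * 3 + ["decrease"]
print(net_score(sample))  # 50.0
```

The useful property of a metric like this is that it measures direction of spend rather than dollar volume, which is why a small company can sit higher on the vertical axis than a hyperscaler.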

Published Date : May 15 2021


Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying by the strict definition. And he did that for a reason, because he's smart enough to know that the chip industry is full of masters at doing workarounds. Here's proof that the death of Moore's Law by its strictest definition is largely irrelevant. My colleague David Floyer and I were hard at work this week and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating, and quite dramatically.
This graphic digs into the progression of Apple's SoC, system on chip, developments from the A9, culminating with the A14, the five-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time for three processor types. The CPU, which we measure here in terahertz, that's the blue line which you can hardly even see; the GPU, which is the orange, that's measured in trillions of floating point operations per second; and then the NPU, the neural processing unit, and that's measured in trillions of operations per second, which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC, because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. Now that equates to about 40% annually. CPU performance has now moderated. That growth is now down to roughly 30% annual improvements. So technically speaking, Moore's Law as we knew it is dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's even higher than that, because in the actual figure for these three processor types we're not even counting the impact of DSPs and accelerator components of Apple's system on a chip. That would push this even higher. Apple's A14, which is shown on the right hand side here, is quite amazing. It's got a 64 bit architecture, it's got many, many cores. It's got a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, the types of AI that we show here continue to evolve: facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack?
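The growth rates quoted above are just compound-rate arithmetic, and it's worth checking them. A quick sketch (the function name and four-year window are illustrative choices, not from the episode):

```python
def annual_rate(total_multiple, years):
    """Compound annual growth rate implied by `total_multiple` over `years`."""
    return total_multiple ** (1.0 / years) - 1.0

# Classic Moore's Law: 2x every 24 months works out to ~41% per year,
# the "about 40% annually" figure quoted above.
moore = annual_rate(2.0, 2.0)
print(round(moore * 100, 1))  # 41.4

# The combined SoC pace of 118% per year compounds fast: over four
# years it implies roughly a 22x improvement.
four_year_multiple = (1.0 + 1.18) ** 4
print(round(four_year_multiple, 1))  # 22.6
```

That gap, 2x in two years versus roughly 22x in four, is the whole argument: the strict doubling definition is dead, but the compound outcome is accelerating.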
Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that was quite telling. We just shared that the processing power is exploding. The costs consequently are dropping like a rock. Apple's A14 costs the company approximately 50 bucks per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption. They said this chip will cost a buck, a dollar, to shave 10% off your refrigerator's electricity bill. It's just astounding. But look at where the expensive bottlenecks are: it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now with custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software, into hardware, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99%, is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models, and we're going to come back to that. But this picture says if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding. Cisco is already designing its own chips. But Dell, HPE, which maybe used to do a lot of its own custom silicon, Pure Storage, NetApp, I mean, the list goes on and on and on: either you're going to start designing custom silicon or you're going to get disrupted, in our view.
AWS, Google and Microsoft are all doing it for a reason, as is IBM, and as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bringing to bear this processing power that we're talking about to create new capabilities like we've never seen before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly, in his book, "Seeing Digital", author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence and says that there's nothing artificial about machine intelligence, just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting nonetheless; words matter. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter", to make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now deep learning is a more advanced type of machine learning. It uses more complex math. But the point that we want to make here is that today much of the activity in AI is around building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model that we were just talking about: taking real time data from sensors, processing that data locally, and then applying the training that has been developed in the cloud and making micro adjustments in real time. So let's take an example. Again, we love Tesla examples.
Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road condition, angles of the tires, the tire wear, the tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car and gets data from sensors and makes these micro adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time, because there's so much of it. It just can't push it all back to the cloud. But it can, however, selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data, because they notice that there's a lot of accidents in New England in certain months. And maybe Tesla takes that snapshot and sends it back to the cloud and combines it with other data, maybe from other parts of the country or other regions of New England, and it perfects that model further to improve safety. This is just one example of thousands and thousands that are going to further develop in the coming decade. I want to talk about how we see this evolving over time. Inference is where we think the value is. That's where the rubber meets the road, so to speak, based on the previous example. Now this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these applications will mature over time. As inference becomes more and more mainstream, the opportunities for AI inference at the edge and in IOT are enormous. And we think that over time, 95% of that spending is going to go to inference, where it's probably only 5% today.
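The train-in-the-cloud, infer-at-the-edge pattern described above can be sketched roughly as follows. Every name here is illustrative, not Tesla's or any vendor's actual API; the point is the shape of the loop: score locally in real time, act immediately, and persist only the rare events worth uploading for retraining.

```python
def infer(model, reading):
    """Stand-in for an onboard inference engine: score one sensor reading."""
    return model["bias"] + model["gain"] * reading["friction"]

def edge_loop(model, readings, anomaly_threshold=0.8):
    """Run inference locally; keep only exceptional samples for the cloud."""
    uplink = []
    for reading in readings:
        score = infer(model, reading)   # real-time local decision
        steering_adjustment = score     # applied immediately, never uploaded
        if reading["anomaly"] > anomaly_threshold:
            uplink.append(reading)      # selective persistence for retraining
    return uplink

model = {"bias": 0.1, "gain": 0.5}
readings = [
    {"friction": 0.3, "anomaly": 0.1},
    {"friction": 0.9, "anomaly": 0.95},  # the animal-on-a-slick-road event
    {"friction": 0.4, "anomaly": 0.2},
]
print(len(edge_loop(model, readings)))  # 1
```

Only one of three readings goes back over the network, which is why most edge data never reaches the cloud and why inference, not transport, becomes the dominant workload.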
Now today's modeling workloads are pretty prevalent in things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better and better over time. Now in the middle here, we show the industries, which are all going to be transformed by these trends. Now, one of the points that Moschella had made in his book: he kind of explains why, historically, vertical industries are pretty stovepiped. They have their own stack, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries, and they're largely insulated from disruption from other industries, maybe unless they were part of a supply chain. But today, you see all kinds of cross industry activity. Amazon entering grocery, entering media. Apple in finance and potentially getting into EV. Tesla eyeing insurance. There are many, many, many examples of tech giants who are crossing traditional industry boundaries. And the reason is because of data. They have the data. And they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain, and they're continuing to improve. Blockchain today doesn't have great performance; it's very overhead intensive with all that encryption. But as they take advantage of this new processing power and better software and AI, it could very well disrupt traditional payment systems. And again, so many examples here. But what I want to do now is dig into enterprise AI a bit. And just a quick reminder, we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is net score. That's a measure of spending momentum. The horizontal axis is market share or pervasiveness in the dataset. The red line at 40% is like a subjective anchor that we use.
Anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there. Very frankly, it's an adjacency to AI, and you could even argue it's part of AI. So it's cloud where all the ML action is taking place today. But that will change, we think, as we just described, because data's going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph, spending velocity by market share on the horizontal axis. Microsoft, AWS, Google, of course, the big cloud guys, they dominate AI and machine learning. Facebook's not on here. Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, they have the scale, and as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here. They're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left. They have an extremely high net score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI. They're super high on the y-axis. Dataiku, they help create machine learning based apps. C3.ai, you're hearing a lot more about them. Tom Siebel's involved in that company. It's an enterprise AI firm; you hear a lot of their ads now about doing AI in a responsible way. Really, that kind of enterprise AI has sort of always been IBM Watson's calling card. There's SAP with Leonardo. Salesforce with Einstein. Again, IBM Watson is right there, just at the 40% line. You see Oracle is there as well.
They're embedding automated, or machine, intelligence with their self-driving database, as they call it; that's sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it. You don't have to be an expert at AI. The hard part is going to be how and where to apply AI. And the simplest answer there is: follow the data. There's so much more to the story, but we just have to leave it there for now, and I want to summarize. We have been pounding the table that the post-x86 era is here. It's a function of volume. Arm wafer volumes are 10X those of x86. Pat Gelsinger understands this. That's why he made that big announcement. He's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors can't be overstated. And today, we've quantified something that we haven't really seen much of and really haven't seen before. And that's that the actual performance improvements that we're seeing in processing today are far outstripping anything we've seen before. Forget Moore's Law being dead, that's irrelevant. The original finding is being blown away this decade, and who knows, with quantum computing, what the future holds. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from the consumer use cases first. Apple continues to lead the way. And Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing really that deep integration.
You see this with Oracle, which is really kind of a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Pat Gelsinger may face the massive challenges that we laid out a couple of posts ago with our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing. There's no end in sight. We don't think we're going to see the ebbs and flows we've seen in the past, those boom and bust cycles for semiconductors. We just think that prices are coming down, the market's elastic, and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI; rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI, and you're going to be applying it. Now data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and he seems right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio and storage and networking, and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible.
And you're going to be writing new applications to take advantage of this and use AI to change the world, literally. You'll have to figure out how to get access to the most relevant data. You have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah well, that's a whole other topic. I think for now, we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and we should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay. That's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind this. He's done some great work there. Remember these episodes are all available as podcasts wherever you listen, just search Breaking Analysis podcast and please subscribe to the series. We'd appreciate that. Check out ETR's website at ETR.plus. We also publish a full report with more detail every week on Wikibon.com and siliconangle.com, so check that out. You can get in touch with me. I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well. And we'll see you next time. (bright music)

Published Date : Apr 10 2021


Sebastien De Halleux, Saildrone | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Well, welcome back here on theCUBE. We're at AWS re:Invent 2019. And every once in a while, we have one of these fascinating interviews that really reaches beyond the technological prowess that's available today into almost the human fascination of work, and that's what we have here. >> Big story. >> Dave Vellante, John Walls. We're joined by Sebastien De Halleux, who is the CEO, oh, COO, rather, of a company called Saildrone, and what they feature is wind-powered flying robots, and they've undertaken a project called Seabed 2030 that will encompass mapping the world's oceans. 85% of the oceans, we know nothing about. >> That's right. >> And, yeah, they're going to combine this tremendous technology with 100 of these flying drones. So, Sebastien, we're really excited to have you here. Thanks for joining us, and wow, what a project! So, just paint the high-level view, I mean, not to have a pun here, but just to share with folks at home a little bit about the motivation of this and what gap you're going to fill. Then we'll get into the technology. >> So I think, you know, the first question is to realize the role of oceans and how they affect you on land and all of us. Half the air you breathe, half the oxygen you breathe, comes from the ocean. They cover 70% of the planet and drive global weather, they drive all the precipitation. They also drive sea-level rise, which affects coastal communities. They provide 20% of the protein, all the fish that we all eat. So, you know, it's a very, very important survival system for all of us on land. The problem is, it's also a very hostile environment, very dangerous, and so, we know very little about it. 
Because we study it with a few ships and buoys, but that's really a few hundred data points to cover 70% of the planet, whereas on land, we have billions of data points that are connected. So, that's why we're trying to fundamentally address, is deploying sensors in the ocean using autonomous surface vehicles, what we call Saildrones, which are essentially, think of them as autonomous sailboats, seven meters, 23 feet, long, bright orange thing with a five-meter-tall sail, which is harnessing wind power for propulsion and solar power for the onboard electronics. >> And then you've got sonar attached to that, that is what's going to do the-- >> The mapping itself. >> The underwater mapping, right, so you can look for marine life, you can look for geographical or topographical anomalies and whatever, and so, it's a multidimensional look using this sonar that, I think, is powered down to seven kilometers, right? >> That's right. >> So that's how far down, 20,000, 30,000 feet. >> That's right. >> So you're going to be able to derive information from it. >> You essentially describe it as, you're painting the ocean with sound. >> That's absolutely right, whereas if you wanted to take a picture of land, you could fly an airplane or satellite and take a photograph, light does not travel through water that well. And so, we use sound instead of light, but the same principle, which is that we send those pulses of sound down, and the echo we listen to from the seabed, or from fish or critters in the water column. And so, yes, we paint the ocean with sound, and then we use machine learning to transform this data into biomass, statistical biomass distribution, for example, or a 3-D surface of the seabed, after processing the sound data. >> And you have to discern between different objects, right? I mean, you (laughs) showed one picture of a seal sunbathing on one of these drones, right? Or is there a boat on the horizon? How do you do that? 
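The "painting the ocean with sound" idea described above is classic echo ranging: depth is the speed of sound times half the round-trip time of the ping. A rough sketch, assuming a typical average sound speed in seawater of about 1,500 m/s (the real value varies with temperature, salinity, and pressure):

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s; a typical average, not a constant

def depth_from_echo(round_trip_seconds):
    """Depth implied by a sonar ping's round-trip (down-and-back) time."""
    return SOUND_SPEED_SEAWATER * round_trip_seconds / 2.0

# An echo returning after ~9.3 seconds corresponds to roughly the
# seven-kilometer maximum depth mentioned in the interview.
print(round(depth_from_echo(9.33), 1))  # 6997.5
```

Working back from echo times, rather than light, is exactly why sonar substitutes for photography underwater: light attenuates within meters, while sound carries for kilometers.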
>> It's an extremely hard problem, because if a human is at sea looking through binoculars at things on the horizon, you're going to become seasick, right? So imagine the state of the algorithm trying to process this in a frame where every pixel is moving all the time, unlike on land, where you have at least a static frame of reference. So it's a very hard problem, and one of the first problems is training data. Where do you get all this training data? So our drones, hundreds of drones, take millions of pictures of the ocean, and then we train the algorithm using either labeled datasets or other source of data, and we teach them what is a boat on the horizon, what does that look like, and what's a bird, what's a seal. And then, in some hard cases, when you have a whale under the Saildrone or a seal lying on it, we have a lot of fun pushing it on our blog and asking the experts to really classify it. (Dave and John laugh) You know, what are we looking at? Well, you see a fin, is it a shark? Is it a dolphin? Is it a whale? It can get quite heated. >> I hope it's a dolphin, I hope it's a dolphin. (Sebastien laughs) All right, so, I want to get into the technology, but I'm just thinking about the practical operation of this. They're wind-powered. >> Sebastien: Yes. >> But they just can't go on forever, right? I mean, they have to touch down at some point somehow, right? They're going to hit water. How do you keep this operational when you've got weather situations, you've got some days maybe where wind doesn't exist or there's not enough there to keep it upright, keep it operational, I mean. >> It's a very good question. I mean, the ocean is often described as one of the toughest environments in the universe, because you have corrosive force, you have pounding waves, you have things you can hit, marine mammals, whales who can breach on you, so it's a very hard problem. 
They leave the dock on their own, and they sail around the world for up to a year, and then they come back to the same dock on their own. And they harvest all of their energy from the environment. So, wind for propulsion, and there's always wind on the ocean. As soon as you have a bit of pressure differential, you have wind. And then, sunlight and hydro-generation for electrical power, which powers the onboard computers, the sensors, and the satellite link that talks back to shore. >> It's all solar-powered. >> Exactly, so, no fuel, no engine, no carbon emission, so, a very environmentally friendly solution. >> So, what is actually on them, well, first of all, you couldn't really do this without the cloud, right? >> That's right. >> And maybe you could describe why that is. And I'm also interested in, I mean, it's the classic edge use case. >> Sure, the ultimate edge. >> I mean, if you haven't seen Sebastien's keynote, you got to. There's just so many keynotes here, but it should be on your top 10 list, so Google "Saildrone keynote AWS re:Invent 2019" and watch it. It was really outstanding. >> Sebastien: Thank you. >> But help us understand, what's going on in the cloud and what's going on on the drone? >> So it is really an AWS-powered solution, because the drones themselves have a low level of autonomy. All they know how to do is go from Point A to Point B and take wave, current, and wind into consideration. All the intelligence happens shoreside. So, shoreside, we crunch huge amounts of datasets, numerical models that describe pressure fields and wind and wave and current and sea ice and all kinds of different parameters, we crunch this, we optimize the route, and we send those instructions via satellite to the vehicle, which then follows the mission plan.
And then, the vehicle collects data, one data point every second, from about 25 different sensors, and sends this data back via satellite to the cloud, where it's crunched into products that include weather forecasts. So you and I can download the Saildrone Forecast app and look at a very beautiful picture of the entire Earth, and look at, where is it going to rain? Where is it going to be windy? Should I have my barbecue outside? Or, is a hurricane coming down towards my region? So, this entire chain, from the drone to the transmission to the compute to the packaging to the delivery in near real time into your hand, is all done using the AWS cloud. >> Yeah, so, I mean, a lot of people use autonomous vehicles as the example and say, "Oh, yeah, that could never be done in the cloud," but I think we forget sometimes, there are thousands of use cases where you don't need, necessarily, that real-time adjustment like you do in an autonomous vehicle. So, your developers are essentially interacting with the cloud and enabling this, right? >> Absolutely, so we are, as I said, really, the foundation for our data infrastructure is AWS, and not just for the data storage, we're talking about petabytes and petabytes of data if you think about mapping 70% of the world, right, but also on the compute side. So, running weather models, for example, requires supercomputers, and this is how it's traditionally done, so our team has taken those supercomputing jobs and brought them into AWS using all the new instances like C3 and C5 and P3, and all this high-performance computing you can now move from old legacy supercomputers into the cloud, and so, that really is an amazing new capability that did not exist even five years ago.
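[Editor's note: the echo-ranging Sebastien describes, "painting the ocean with sound," comes down to timing a pulse's round trip to the seabed. A minimal sketch of that arithmetic follows; the 1,500 m/s figure is a standard nominal speed of sound in seawater, not a number from the interview, and the function name is ours.]

```python
def seabed_depth_m(round_trip_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Estimate seabed depth from a sonar ping's round-trip echo time.

    The pulse travels down to the seabed and back, so the one-way
    range is half of (speed x time). 1500 m/s is a common nominal
    speed of sound in seawater; the true value varies with
    temperature, salinity, and pressure.
    """
    return sound_speed_m_s * round_trip_s / 2.0

# A roughly 9.3-second round trip corresponds to about the seven
# kilometers of rated depth mentioned in the interview.
print(f"{seabed_depth_m(9.3):.0f} m")  # ~6975 m
```

At that nominal speed, the seven-kilometer figure quoted earlier implies an echo that takes over nine seconds to return, which is part of why deep-ocean mapping is so slow and data-intensive.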
>> Sebastien, did you ever foresee the day where you might actually have some compute locally, or even some persistent-- >> So on the small Saildrones, which is the majority of our fleet, which is going to number a thousand Saildrones at scale, there is very little compute, because the amount of electrical power available is quite low. >> Is not available, yeah. >> However, on the larger Saildrone, which we announced here, which is called the Surveyor-- >> How big, 72 feet, yeah. >> Which is a 72-foot machine, so this has a significant amount of compute, and it has onboard machine learning and onboard AI that processes all the sonar data to send the finished product back to shore. Because, you know, no matter how fast satellite connectivity's evolving, it's always a small pipe, so you cannot send all the raw data for processing on shore. >> I just want to make a comment. So people often ask Andy Jassy, "You say you're misunderstood. "What are you most misunderstood about?" I think this is one of the most misunderstood things about AWS. The edge is going to be won by developers, and Amazon is basically taking its platform and allowing it to go to the edge, and it's going to be a programmable edge, and that's why I really love the strategy. But please, yeah. >> Yeah, no, we talked about this project, you know, Seabed 2030, but you talked about weather forecasts, and whatever. Your client base already, NASA, NOAA, research universities, you've got an international portfolio. So, you've got a whole (laughs) business operation going. I don't want to give people at home the idea that this is the only thing you have going on. You have ongoing data collection and distribution going on, so you're meeting needs currently, right? >> That's right, we supply governments around the world, from the U.S. government, of course, to Canada, Mexico, Japan, Australia, the European Union, well, you name it. If you've got a coastline, you've got a data problem. 
And no government has ever come and told us, "We have enough ships or enough data on the oceans." And so, we are really servicing a global user base by using this infrastructure that can provide you a thousand times more data and a whole lot of new insights that can be derived from that data. >> And what's your governance structure? Are you a commercial enterprise, or are you going-- >> We are a commercial enterprise, yes, we're based in San Francisco. We're backed by long-term impact venture capital. We've been revenue-generating since day one, and we just offer a tremendous amount of value at a much lower cost. >> You used the word impact. There's a lot of impact funds that are sort of emerging now. At the macro, talk about the global impact that you guys hope to have, and the outcome that you'd like to see. >> Yeah, you know, our planetary data is all about understanding things that impact humanity, right? Right now, here at home, you might have a decent weather forecast, but if you go to another continent, would that still be the case? Is there an excuse for us to not address this disparity of information and data? And so, by running global weather models and getting global datasets, you can really deliver an impact at very low marginal cost for the entire global population with the same level of quality that we enjoy here at home. That's really an amazing kind of impact, because, you know, rich and developed nations can afford very sophisticated infrastructure to count their fish and establish fishing quotas, but other countries cannot. Now, they can, and this is part of delivering the impact: leveraging this amazing infrastructure and putting it, with a simple product, in the hands of someone whether they live on the islands of Tuvalu or in Chicago. >> You know, it's part of our mission to share stories like this, that's how we have impact, so thank you so much for-- >> I mean, we--
We talk about data lakes, this is data oceans. (Dave laughs) This is big-time stuff, like, serious storage. All right, Sebastien, thank you. Again, great story, and we wish you all the best and look forward to following this for the next 10 years or so. Seabed 2030, check it out. Back with more here from AWS re:Invent 2019. You're watching us live, right here on theCUBE. (upbeat pop music)

Published Date : Dec 7 2019


Action Item Quick Take | George Gilbert - Feb 2018


 

(upbeat music) >> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. George Gilbert, everybody's talking about AI, ML, DL, as though, like always, the stack's going to be highly disaggregated. What's really happening? >> Well, the key part for going really mainstream is that we're going to have these embedded in the fabric of applications, high volume applications. And right now, because it's so bespoke, it can only really be justified in strategic applications. But the key scarce resource is the data scientists; we can look at them as the new class of developer, but they're very different from the old class of developer. I mean, they need entirely different schooling and training and tools. So, the closest we can get to the completely bespoke apps are at the big tech companies for the most part, or tech-centric companies like ad tech and fintech. But beyond that, when you're trying to get these out to more mainstream, but still sophisticated, customers, we have platforms like C3 or IBM's Watson IoT, where there are templates, but you work intensively between the vendor and the customer. It's going to be a while before we see them as widespread components of legacy enterprise apps, because those apps actually are keeping the vendors and the customers busy trying to move them to the cloud. They were heavily customized and you can't really embed the machine learning apps in those while they're so customized, because they need a certain amount of data, and they evolve very quickly, whereas the systems of record are very, very rigid. >> All right, once again, thank you very much George. This has been Peter Burris with another Wikibon Action Item Quick Take. (upbeat music)

Published Date : Feb 23 2018


Sanjay Poonen, VMware | AWS re:Invent


 

>> Narrator: Live from Las Vegas it's theCube covering AWS re:Invent 2017 presented by AWS, Intel and our ecosystem of partners. >> Hello and welcome to theCube's exclusive coverage here in Las Vegas for AWS, Amazon Web Services re:Invent 2017, 45,000 people. It's theCube's fifth year in covering AWS, five years ago I think 7,000 people attended, this year close to 45,000, developers and industry participants. And of course this is theCube, I'm John Furrier with my co-host Keith Townsend and we're excited to have Cube alumni Sanjay Poonen, who's the chief operating officer for VMware. Sanjay great to see you, of course a good friend with Andy Jassy, you went to Harvard Business School together, both Mavericks, welcome to theCube. >> Thank you and you know what I loved about the keynote this morning? Andy and I both love music. And he had all this musical stuff man. He had Tom Petty, he had Eric Clapton. I'm not sure I like all of his picks but at least those two, loved it man. >> The music thing really speaks to the artists, artists inside of this industry. >> Yes. >> And we were talking on theCube earlier that, we're in a time now where, and I think Tom Siebel said it when he was on, that there's going to be a mass, just extinction of companies that don't make it on the digital transformation, and he cited some. You're at VMware, you guys are transforming and continue to do well, you've a relationship with Amazon Web Services, talk about the challenge that's in front of business executives right now around this transformation because possibly looking at extinction for some big brands, potentially big companies in IT. >> It's interesting that Tom Siebel would say that in terms of where Siebel ended up and where Salesforce is now; I respect him, he's obviously doing good things at C3. But listen, that's I think what every company has got to ask itself, how do you build longevity? How do you make yourself sustainable?
Next year will be our 20 year anniversary of VMware's founding. The story could have been written about VMware that you were the last good company and then you were a legacy company because you were relevant to yesterday's part of the world, which was the data center. And I think the key thing that kept us awake the last two or three years was how do you make yourself relevant to the other side of history, which is the public cloud? What we've really been able to do over the last two or three years is build a story of the company that's not just relevant to the data center and private cloud, which is not going away guys as you know, but build a bridge into the public cloud, and this partnership has been a key part of that, and then of course the third part of that is our end user computing story. So I think cloud, mobile, security have become the pillars of the new VMware and we're very excited about that and this show, I mean if you combine the momentum of this show and VMworld, collectively at VMworld we have probably about 70, 80,000 people who come to VMworld and Vforums, there's 45,000 people here with all the other summits, there's probably another 40,000 people, so collectively about 100 to 150,000 people are coming to the largest infrastructure shows on the planet, great momentum. >> And as an infrastructure show that's turning into a developer show, let me get your thoughts, and I want to just clarify something 'cause we pointed this out at VMworld this year because it's pretty obvious what happened. The announcement that Raghu and your team did with AWS was instrumental. The proof was at VMworld where you saw clarity in the messaging. Everyone can see what's going on. I now know what's happening, my operations are gonna be secure, I can run vSphere on the cloud or on-prem, everything could be called what it is.
But the reality is that you guys have the operators, IT operations, and Amazon has a robust cloud native developer community, not that they're conflicting in any way, they're coming together, so it was a smart move. So I've got to ask you, as you guys continue your relationship with AWS, how are you guys tying the new ops role, ops teams with the dev teams? Because with IoT, this is where it's coming together, you can see it right there. Your thoughts? >> I mean listen, the partnership is going great. I just saw Andy Jassy after his exec summit session, gave him a hug. We're very excited about it and I think of any of the technology vendors he mentioned on stage, we were on several slides there, mentioned a few times. I think we're probably one of the top tech partners of his and the reality is, there's two aspects to the story. One is the developer and operations come together, which you eloquently articulated. The other aspect is, we're the king of the private cloud and they're the king of the public cloud; when you can bring these together, you don't have to make it a choice between one or the other, we want to make sure that the private cloud is maximized to its full extent and then you build a bridge into the public cloud. I think those two factors, bringing developer and operations together and marrying the private and public cloud, what we call hybrid cloud computing, a term we coined and now of course many others-- >> I think-- >> On top of the term. Well whoever did. >> I think HP might have coined it. >> But nonetheless, we feel very good about the future of developer and operations and hybrid cloud computing being a good part of the world's future. >> Sanjay, I actually interviewed you at VMworld 2016 and you said something very interesting that now I look back on it I'm like, "Oh of course." Which is that, you gave your developers the tools they needed to do their jobs, which at the time included AWS, before the announcement of the VMware and AWS partnership.
AWS doesn't change their data center for anyone, so the value that obviously you guys are bringing to them and their customers speaks volumes. AWS has also said, Andy on stage says, he tries to go out and talk to customers every week. I joked before the start of this that every LinkedIn request I get, you're already a connection of that LinkedIn request. How important is it for you to talk to your internal staff as well as your external customers to get the pulse of this operations and developer movement going and infused into the culture of VMware? >> Well Keith I appreciate the kind words. When we decided who to partner with and how to partner with them, when we had made the announcement last year, we went and talked to our customers. We're very customer and client focused, as are they. And we began to hear, very proportional to the market share stats, AWS most prominently, every one of our customers telling us the same thing that both Andy and we were asking, which is "Why couldn't you get the best of both worlds? "You're making a choice." Now we had a little bit of an impediment in the sense that we had tried to build a public cloud with vCloud Air, but once we made the decision that we were getting out of that business, divested it, took care of those clients, the door really opened up and we started to test the pulse with a couple of customers under NDA. What if you were to imagine a partnership between us and Amazon, what would you think? And man, I can tell you, a couple of these customers, some of whom were on stage at the time of the announcement, fell off their chair. This would be huge. This is going to be like a, one customer said it's gonna be like a Berlin Wall moment, the US and the Soviet Union getting together. I mean the momentum building up to it. So now what we've got to do, it's been a year later, we've shipped, released, the momentum still is pretty high there, we've gotta now start to really make this actionable, get customers excited.
Most of my meetings here have been with customers. System integrators, including one of the largest SIs in the world. They're seeing this as a big part of the momentum. Our booth here is pretty crowded. We've got to make sure now that the customers can start realizing the value of VMware and AWS as they build. The other thing, as you mentioned, that both sides did very explicitly in the design of this was to ensure that each other's engineering teams were closely embedded. So it's almost like having an engineering team of VMware embedded inside Amazon and an engineering team of Amazon embedded inside VMware. That's how closely we work together. Never done before in the history of both companies. I don't think they've ever done it with anybody else, certainly not at this level. That represents the trust we had with each other. >> Sanjay, I gotta ask you, we were talking with some folks last night, I was saying that you were coming on theCube and I said, "What should I ask Sanjay? "I want to get him a zinger, "I want to get him off his messaging." Hard to do but we'll try. They said, "Ask him about security." So I gotta ask you, because security has been Amazon's kryptonite for many years. They've done the work in the public sector, they've done the work in the cloud with security and it's paying off for them. Security still needs to get solved. It's a solvable problem. What is your stance on security now that you got the private and hybrid going on with the public? Anything change? I know you got the AirWatch, you're proud of that, but what else is going on? >> I think quietly, VMware has become one of the prominent brands that have been talked about in security. We had a CIO survey that I saw recently in network security where increasingly, customers are talking about VMware because of NSX. When I go to the AirWatch conference I look at the business cards of people and they're all in the security domain of endpoint security.
What we're finding is that security requires a new view of it where, it can't be 6000 vendors. It feels like a strip mall where every little shop has got its boutique little thing that you ought to buy, and when you buy a car you expect a lot of the things to be solved in the core aspects of the car as opposed to buying a lot of add-ons. So our point of view first off is that security needs to be baked into the infrastructure, and we're gonna do that. With products like NSX that bake it into the data center, with products like AirWatch and Workspace ONE that bake it into the endpoint, and with products like App Defense that even take it deeper into the core of the hypervisor. Given that, we've begun to also really focus our education of customers on higher level terms. I was talking to a CIO yesterday who was educating his board on what are some of the key things in cyber security they need to worry about. And the CIO said this to me, the magic word that he is training all of his board members on is segmentation. Micro-segmentation is a very simple concept that NSX sort of pioneered. We're finding that now become very relevant. Same-- >> So that's paying off? >> Paying off big time. WannaCry and Petya taught us that patching probably is a very important aspect of what people need to do. Encryption, you could argue a lot of what happened in the Equifax breach may have been mitigated if the data had been encrypted. Identity, multi-factor authentication. We're seeing a couple of these key things being hygiene that we can educate people better on in security; it really is becoming a key part of our story now. >> And you consider yourself top-tier security provider-- >> We are part of an ecosystem, but our point of view in security now is very well informed in helping people on the data center to the endpoint to the cloud and helping them with some of these key areas.
And because we're so customer focused, we don't come at this the way traditional security players do, and we don't necessarily have a brand there, but increasingly we're finding with the success of NSX, Workspace ONE and the introduction of new products like App Defense, we're building a point of view in security that's highly differentiated and unique. >> Sanjay, big acquisition in the SD-WAN space. Tell us how that, and this acquisition in SD-WAN, the edge, the cloud, plays into VMware, which is traditionally a data center company. SD-WAN, help us understand that acquisition. >> Good question. >> As we saw the data center and the cloud starting to develop, that people understand pretty well. We began to also hear and see another aspect of what people were starting to see happen, which was the edge, and increasingly IoT is one driver of that. And our customers started to say to us, "Listen if you're driving NSX and its success "in the data center, wouldn't it be good "to also have a software-defined wide area network strategy "that allows us to take that benefit of networking, "software-defined networking to the branch, to the edge?" So increasingly we had a choice. Do we build that ourselves on top of NSX and build out an SD-WAN capability, which we could have done, or do we go and look at our customers? For example we went and talked to telcos like AT&T and they said the best solution out there is a company called VeloCloud. We started to talk to customers who were using them and we analyzed the space and we felt it would be much faster for us to buy rather than build a software-defined networking story that goes from the data center to the branch.
And VeloCloud was well-regarded. I would view this, it's early and we haven't closed the acquisition as yet, but once we close this, this has all the potential to have the type of transformative effect like an AirWatch or a Nicira had, in a different way, at the edge. And we think the idea of edge and core, which is the data center and cloud, become very key aspects of where infrastructure plays. And it becomes a partnership opportunity. VeloCloud will become a partnership opportunity with the telcos, with the AWSs of the world and with the traditional enterprises. >> So bring it all together for us. Data center, NSX, edge SD-WAN, AirWatch capability, IoT, how does all of that connect together? >> You should look at IoT and edge being kind of related topics. Data center and the core being related topics, cloud being a third, and then of course the end-user landscape and the endpoint being where it is, those would be the four areas. Data center being the core of where VMware started, that's always gonna be, and our stick there so to speak is that we're gonna take what was done in hardware and do it in software, significantly cheaper, less complex, and make a lot of money there. But then we will help people bridge into the cloud and bridge into the edge, that's the core part of our strategy. Data center first, cloud, edge. And then the end user world sits on top of all of that because every device today is either a phone, a tablet or a laptop and there's no vendor that can manage the heterogeneous landscape today of Apple devices, Google devices, Apple being iOS and Mac, Android, Chrome in the case of Google, or Windows 10 in the case of Microsoft. That heterogeneous landscape, managing and securing that, which is what AirWatch and Workspace ONE does, is uniquely ours. So we think this proposition of data center, cloud, edge and end-user computing, huge opportunity for VMware.
NSX becomes to us as important as ESX was, in fact that's kind of why we like the name. It becomes the backbone and platform for everything we do that connects the data center to the cloud, it's a key part of VMC for example. It connects the data center to the edge, hence what we've done with SD-WAN, and it's also a key part of what connects to the end user world. When you connect network security with what we're doing with AirWatch, which we announced two years ago, you get magic. We think NSX becomes fundamental, and we're only in the first or second or third inning of software-defined networking. We have a few thousand customers of NSX, okay, that's a fraction of the 500,000 customers of VMware. We think we can take that on, and the networking market is an 80 billion dollar market ripe for a lot of innovation.
They may have aspirations in enterprise but they're primarily consumer companies, and those are actually what most people can relate to, and those are now some of the biggest market cap companies in the world. When you look at the enterprise, typically you can divide them into applications companies, companies like Salesforce, SAP and parts of Oracle and others, Workday, and then companies in infrastructure, which is where companies like VMware and AWS and so on fit. I think what's happening is, there's a significant shift because of the cloud to a whole new avenue of spending where every company has to think about themselves as a technology company. And the same thing's happening with mobile devices. Cloud, mobile, security ties many of those conversations together. And there are companies that are innovators and there are companies, that you described earlier John at the start of this show, that are going to become extinct. >> My thesis is this, I want to get your reaction to this. I believe a software renaissance is coming and it's gonna be operated differently and you guys are already kind of telegraphing your move, so if that's the case, then a whole new guard is gonna be developing, he calls it the new guard. Old guard, he refers to, kind of, the older guards. My criticism of him was that he put a Gartner slide up there, which is as old guard as you get. Andy's promoting this whole new guard thing yet he puts up the Gartner Magic Quadrant for infrastructure as a service, that's irrelevant to his entire presentation, hold on, the question is about, you know, Gartner-- >> Before I defend him. >> They're all old guard, don't defend him too fast. I know, the buyers, do they trust Gartner? Maybe not. The point is, what are the new metrics? We need new metrics because the cloud is horizontally scalable. It's integrated. You got software driving decision making, it's not about a category, it's about a fabric.
I'm a friend of Andy, I love what he talked about, and I'm not here to defend or criticize Gartner, but what I liked about his presentation was, he showed the Gartner slide probably about 20 minutes into the presentation. He started off with his metrics of revenue and number of customers. >> I get that, show momentum, Gartner gives you like the number one-- >> But the number of customers is what counts the most. The most important metric is adoption, and last year he said there was about a million customers, this year he said several million. And if it's true that both startups and enterprises are adopting this, adopting, I don't mean just buying, there is momentum here. Irrespective of the analysts talking about this, hopefully-- >> Alright, so I buy the customer argument, and I've said that on theCube before, of course, and Microsoft could say, "We listen to customers too and we have a zillion customers running Office 365." Is that really cloud or fake cloud? >> At the end of the day, at the end of the day, it's not a winner-take-all market for one player. I think all of these companies will be successful. They have different strategies. Microsoft's strategy is driven from Office 365 and some of what they can do in Windows into Azure. These folks have come up from the bottom up. Oracle's trying to come at it from a different angle, Google's trying to come at a different angle, and the good news is, all of these companies have deep pockets and will invest. Amazon does have a head start. They are number one in the market. >> Let me rephrase it. Modern applications could be, I'll buy the customer workload argument if it's defined as a modern app. Because Oracle could say I got a zillion customers too, and they win on that, those numbers are pretty strong, so is Microsoft's. But to me the cloud is showing a new model. >> Absolutely. >> So what in your mind is a good metric for saying that's a modern app, and that is not.
>> I think you can look at the modern companies like the Airbnbs, the Pinterests, the Slacks and whoever. Some of them are going to make a decision to do their own infrastructure. Facebook does not put their IaaS on top of AWS or Azure or Google, they built their own data centers because they can afford to and want to do it. That's their competitive advantage. But for companies who can't, if they are building their apps on these platforms, that's one element. And then the traditional enterprises, they think about their evolution. If they're starting to adopt these platforms, not just to migrate old applications to new ones, where VMware fits in, or building new cloud native applications on there, I think that momentum is clear. When was the last time you saw a company go from zero to 18 billion in the 10, 12 years that he's been around? Or VMware or Salesforce go from zero to eight billion in the last 18 years? This phenomenon of companies like Salesforce, VMware and AWS-- >> It's all the scale guys, you gotta get to scale, you gotta have value. >> This is unprecedented in the last five to 10 years, unprecedented. These companies I believe are going to be the companies of the tech future. I'm not saying that about the old guard, but if they don't change, they won't be the companies that people talk about. The phenomenon of AWS just going from zero to 18 is, I personally think-- >> And growing 40% on that baseline. >> Andy's probably one of the greatest leaders of our modern time for his role in making that happen, but I think these are the companies that we watch carefully. The companies that are growing rapidly, that our customers are adopting them in the hundreds of thousands if not millions, there's true momentum there. >> So Sanjay, data has gravity, data is also the new oil. We look at what Andy has in his arsenal, all of the data that's in S3 that he can run all his ML and AI services against, that's some great honey for this audience.
When I look at VMware, there's not much of a data strategy, there's securing the data in transit, but there's not a data strategy. What is VMware's data strategy to help customers make the most of that oil? >> We've talked about it in terms of our data analytics and what we're doing in machine learning and AI. We felt this year, given so much of what we had to announce around security, software-defined networking, the branch, the edge, that putting more of that into VMworld, which is usually our big event where we announce this stuff, would have just crowded it out for people. But we began to lay the seeds of what you'll start to hear a lot more of in 2018. Not trying to give a spoiler alert, but we acquired this company Wavefront that does next-generation cloud native metrics and analytics. Think of it as like, you did that with AppDynamics in the old world, you're doing this with Wavefront in the new world of cloud native. We have really rethought through how all the data we collect, whether it's in the data center or on the endpoint, could be mined and become telemetry that we actually use. We bought another company, Apteligent, formerly called Crittercism, that's allowing us to do that type of analytics on the endpoint. You're gonna see a couple of these moves that are the breadcrumbs of the more comprehensive analytics strategy we'll start announcing in 2018, which I think is very exciting. I think the other thing we've been cautious to do is not AI wash, there's a lot of cloud washing and machine learning washing that happened with companies-- >> They're stopping a wave on-- >> Now it's authentic, now I think it's out there when, when Andy talks about all they're doing in AI and machine learning, there's an authenticity to it. We want to, in the same way, have a measured, careful strategy, and you will absolutely hear from us a lot more. Thank you for bringing it up because it's something that's on our radar.
>> Sanjay we gotta go, but thanks for coming and stopping by theCube. I know you're super busy, and great to drop in and see you. >> Always a pleasure and thanks-- >> Congratulations-- >> And Keith, good to talk to you again. >> Congratulations, all the success you're having with the show. >> We're doing our work, getting the reports out there, reporting here on theCube, we have two sets, 45,000 people, exclusive coverage on siliconangle.com, more data coming every day, we have another whole day tomorrow, big night tonight, the Pub Crawl, meetings, VCs, I'll be out there, we'll be out there, grinding it out, ear to the ground, go get those stories and bring them to you. It's theCube live coverage from AWS re:Invent 2017, we're back with more after this short break.

Published Date : Nov 30 2017


Miles Kingston, Intel | AWS re:Invent


 

>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2017 presented by AWS, Intel and our ecosystem of partners. >> Hello and welcome back. Live here is theCUBE's exclusive coverage in Las Vegas. 45,000 people attending Amazon Web Services' AWS re:Invent 2017. I'm John Furrier with Lisa Martin. Our next guest is Miles Kingston, he is the General Manager of the Smart Home Group at Intel Corporation. Miles, it's great to have you. >> Thank you so much for having me here, I'm really happy to be here. >> Welcome to theCUBE Alumni Club. First time on. All the benefits you get as an Alumni is to come back again. >> Can't wait, I'll be here next year, for sure. >> Certainly, you are running a new business for Intel, I'd like to get some details on that, because smart homes. We were at the Samsung Developer Conference, we saw the smart fridge, the smart living room. So we're starting to see this become a reality; at CES, every year for 10 years, it's been the smart living room. So finally, with cloud and all of the computing power, it's arrived, or has it? >> I believe we're almost there. I think the technology has finally advanced enough and there is so much data available now that you have this combination of this technology that can analyze all of this data and truly start doing some of the artificial intelligence that will help you make your home smarter. >> And we've certainly seen the growth of Siri with Apple, Alexa for the home with Amazon, just really going crazy. In fact, during the Industry Day, yesterday, the repeat session most attended by developers was Alexa. So Alexa's got the mindshare and has captured the imagination of the developers. Where does it go from here and what is the difference between a smart home and a connected home? Can you just take a minute to explain and set the table on that? >> Yeah and I agree, the voice capability in the home, it's absolutely foundational.
I think I saw a recent statistic that by 2022, 55% of US households are expected to have a smart speaker type device in their home. So that's a massive percentage. So I think, if you look in the industry, connected home and smart home, they're often used synonymously. We personally look at it as an evolution. And so what I mean by that is, today, we think the home is extremely connected. If I talk about my house, and I'm a total geek about this stuff, I've got 60 devices connected to an access point, I've got another 60 devices connected to an IOT hub. My home does not feel very smart. It's crazy connected, I can turn lights on and off, sprinklers on and off, it's not yet smart. What we're really focused on at Intel is accelerating that transition for your home to truly become a smart home and not just a connected home. >> And software is a key part of it, and I've seen developers attack this area very nicely. At the same time, the surface area with these smart homes opens up security issues, hackers. 'Cause WiFi, you can run a process on these, these are computers. So how does security fit into all of this? >> Yeah, security is huge and so at Intel we're focused on four technology pillars, which we'll get through during this discussion. One of the first ones is connectivity, and we actually have technology that goes into a WiFi access point, the actual silicon. It's optimized for many clients to be in the home, and also, we've partnered with companies, like McAfee, on security software that will sit on top of that. That will actually manage all of the connected devices in your home, as that extra layer of security. So we fundamentally agree that the security is paramount. >> One of the things that I saw on the website is that Intel is taking a radically different approach based on proactive research into ways to increase smart home adoption. What makes Intel's approach radically different? >> Yeah, so I'm glad that you asked that.
We've spent years going into thousands of consumers' homes in North America, Western Europe, China, etc., to truly understand some of the pain points they were experiencing. From that, we basically gave all this information to our architects and we really synthesized it into what areas we need to advance technology to enable some of these richer use cases. So we're really working on those foundational building blocks, and so those four ones I mentioned earlier: connectivity, that one is paramount. You know, if you want to add 35 to 100 devices in your home, you better make sure they're all connected, all the time, and that you've got good bandwidth between them. The second technology was voice, and it's not just voice in one place in your home, it's voice throughout your home. You don't want to have to run to the kitchen to turn your bedroom lights on. And then, vision. You know, making sure your home has the ability to see more. It could be cameras, could be motion sensors, it could be vision sensors. And then this last one is this local intelligence. This artificial intelligence. So the unique approach that Intel is taking is across all of our assets. In the data center, in our artificial intelligence organization, in our new technology organization, our IOT organization, in our client computing group. We're taking all of these assets and investing them in those four pillars and kind of really delivering unique solutions, and there's actually a couple of them that have been on display this week so far. >> How about DeepLens? That certainly was an awesome keynote point, and the device that Andy introduced is essentially a wireless device that basically has machine learning and AI in it. And that is awesome, because it's also an IOT device, it's got so much versatility to it. What's behind that? Can you give some color to DeepLens? What does it mean for people? >> So, we're really excited about that one.
We partnered with Amazon at AWS on that for quite some time. So, just a reminder to everybody, that is the first deep-learning enabled wireless camera. And what we helped do in that is, it's got an Intel Atom processor inside that actually runs the vision processing workload. We also contributed a deep learning toolkit, kind of a software middleware layer, and we've also got the Intel Compute Library for deep neural networks. So basically, a lot of preconfigured algorithms that developers can use. The bigger thing, though, is when I talked about those four technology pillars; the vision pillar, as well as the artificial intelligence pillar, this is a proof point of exactly that. Running an instance of the AWS service on a local device in the home to do this computer vision. >> When will that device be available? And what's the price point? Can we get our hands on one? And how are people going to be getting this? >> Yeah, so what was announced during the keynote today is that there are actually some deep learning workshops today, here at re:Invent, where they're actually being given away, and then as soon as the announcement was made during the keynote today, they became available for pre-order on Amazon.com. I'm not actually sure on the shipping date on Amazon, but anybody can go and check. >> Jeff Frick, go get one of those quickly. Order it, put my credit card down. >> Miles: Yes, please do. >> Well, that's super exciting and now, where's the impact in that? Because it seems like it could be a great IOT device. It seems like it would be a fun consumer device. Where do you guys see the use cases for these developing?
The only way we're going to get those though, is if you get these brilliant developers getting their hands on the hardware, with someone like Amazon, who's made all of the machine learning, and the cloud and all of the pieces easier. It's now going to make it very easy for thousands, ideally, hundreds of thousands of developers to start working on this, so they can enable these new use cases. >> The pace of innovation that AWS has set, it's palpable here, we hear it, we feel it. This is a relatively new business unit for Intel. You announced this, about a year ago at re:Invent 2016? Are you trying to match the accelerated pace of innovation that AWS has? And what do you see going on in the next 12 months? Where do you think we'll be 12 months from now? >> Yeah, so I think we're definitely trying to be a fantastic technology partner for Amazon. One of the things we have since last re:Invent is we announced we were going to do some reference designs and developer kits to help get Alexa everywhere. So during this trade show, actually, we are holding, I can't remember the exact number, but many workshops, where we are providing the participants with a Speech Enabling Developer toolkit. And basically, what this is, is it's got an Intel platform, with Intel's dual DSP on it, a microarray, and it's paired with Raspberry Pi. So basically, this will allow anybody who already makes a product, it will allow them to easily integrate Alexa into that product with Intel inside. Which is perfect for us. >> So obviously, we're super excited, we love the cloud. I'm kind of a fanboy of the cloud, being a developer in my old days, but the resources that you get out of the cloud are amazing. But now when you start looking at these devices like DeepLens, the possibilities are limitless. So it's really interesting. The question I have for you is, you know, we had Tom Siebel on earlier, pioneer, invented the CRM category. 
He's now the CEO of C3 IOT, and I asked him, why are you doing a startup, you're a billionaire. You're rich, you don't need to do it. He goes, "I'm a computer guy, I love doing this." He's an entrepreneur at heart. But he said something interesting, he said that the two waves that he surfs, they call him a big time surfer, he's hanging 10 on the waves, are IOT and AI. This is an opportunity for you guys to reimagine the smart home. How important are the IOT trend and the AI trend for really doing it right with the smart home, and whatever we're calling it. There's an opportunity there. How are you guys viewing that vision? What progress points have you identified at Intel, to kind of, check? >> Completely agree. For me, AI really is the key turning point here. 'Cause even just talking about connected versus smart, the thing that makes it smart is the ability to learn and think for itself. And the reason we have focused on those technology pillars is we believe that by adding voice everywhere in the home, and the listening capability, as well as adding the vision capability, you're going to enable all of this rich new data, which you have to have some of these AI tools to make any sense of, and when you get to video, you absolutely have to have some amount of it locally. So, whether for bandwidth reasons, for latency reasons, or for privacy reasons, like some of the examples that were given in the keynote today, you just want to keep that stuff locally. >> And having policy running on it, you know, access points are interesting, it gives you connectivity, but these are computers, so if someone gets malware in the home, they can run a full threaded process on these machines. Sometimes you might not want that. You want to be able to control that. >> Yes, absolutely. We really believe that the wireless access point in the home is one of the greatest areas where you can add additional security in the home and protect all of the devices.
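The point above about keeping video local, whether for bandwidth, latency, or privacy reasons, comes down to an edge-inference pattern: score every frame on the device and send only the rare interesting events upstream. Here is a minimal sketch of that pattern in Python; the `classify` stub and the frame format are hypothetical stand-ins for an on-device model, not any DeepLens or Intel API:

```python
# `classify` is a hypothetical stand-in for the on-device deep-learning
# model; a real camera would run an optimized neural network locally.
def classify(frame):
    return "person" if frame.get("motion") else "background"

def process_frames(frames, interesting=("person",)):
    """Score every frame locally and keep only the events worth sending
    upstream, so raw video never has to leave the home network."""
    events = []
    for index, frame in enumerate(frames):
        label = classify(frame)
        if label in interesting:
            events.append((index, label))
    return events

# Three frames, only the middle one has motion.
frames = [{"motion": False}, {"motion": True}, {"motion": False}]
print(process_frames(frames))  # -> [(1, 'person')]
```

In this shape, raw frames stay in the home and only small (index, label) events would ever cross the network, which is the bandwidth and privacy argument made above.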
>> So you mentioned, I think, 120 different devices in your home that are connected. How far away do you think your home is from going from connected to smart? What's that timeline like? >> You know what, honestly, I think a lot of the hardware is already there. And the example I will give is, and I'm not just saying this because I'm here, but I actually do have 15 Echos in my house, because I do want to be able to control all of the infrastructure everywhere in the home. I do believe in the future those devices will be listening for anomalies, like glass breaking, a dog barking, a baby crying, and I believe the hardware we have today is very capable of doing that. Similarly, I think that a lot of the cameras today are trained, whenever they see motion, to do certain things and to start recording. I think that use case is going to evolve over time as well, so I truly believe that we are probably two years away from really seeing, with some of the existing infrastructure, truly being able to enable some smarter home use cases. >> The renaissance going on, the creativity is going to be amazing. I'm looking at a tweet that Bert Latimar from our team made on our last interview, with the Washington County Sheriff, a customer of Amazon that pays $6 a month for getting all the mugshots. He goes, "I'm gonna use DeepLens for things like recognizing scars and tattoos." Because now they have to take pictures when someone comes in as a criminal, but now with DeepLens, they can program it to look for tattoos. >> Yeah, absolutely. And if you see things like the Ring Doorbell today, they have that neighborhood application of it, so you can actually share within your local neighborhood if somebody had a package stolen, they can post a picture of that person. And even just security cameras, my house, it feels like Fort Knox sometimes, I've got so many security cameras.
It used to be, every time there was a windstorm, I got 25 alerts on my phone, because a branch was blowing. Now I have security cameras that actually can do facial recognition and say, your son is home, your daughter is home, your wife is home. >> So are all the houses going to have a little sign that says, "Protected by Alexa and Intel and DeepLens"? >> Don't you dare, exactly. (laughs) >> Lisa: And no sneaking out for the kids. >> Yes, exactly. >> Alright, so real quick to end the segment, quickly summarize and share, what is the Intel relationship with Amazon Web Services? Talk about the partnership. >> It's a great relationship. We've been partnering with Amazon for over a decade, starting with AWS. Over the last couple of years, we've started working closely with them on their first party products. So, many of you have seen the Echo Show and the Echo Look, which have Intel inside. The Look also has a RealSense camera. We've now enabled the Speech Enabling Developer Kit, which is meant to help get Alexa everywhere, running on Intel. We've now done DeepLens, which is a great example of local artificial intelligence. Paired with all the work we've done with them in the cloud, it really is, I would say, a partnership that extends all the way from the very edge device in the home to the cloud. >> Miles, thanks for coming. Miles Kingston with Intel, General Manager of the Smart Home Group, a new business unit at Intel, really reimagining the future for people's lives. I think this is a great case where technology can actually help people, rather than making it any more complicated. Which we all know, if we have access points and kids gaming, it can be a problem. It's theCUBE, live here in Las Vegas. 45,000 people here at Amazon re:Invent. Five years ago, our first show, only 7,000. Now what amazing growth. Thanks so much for coming out, Lisa Martin and John Furrier here, reporting from theCUBE. More coverage after this short break. (light music)

Published Date : Nov 29 2017


Keegan Riley, HPE | VMworld 2017


 

>> Announcer: Live from Las Vegas it's theCUBE covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Okay, welcome back everyone. Live CUBE coverage here at VMworld 2017. Three days, we're on our third day of VMworld, always a great tradition, our eighth year. I'm John Furrier with theCUBE, co-hosted by Dave Vellante of Wikibon, and our next guest is Keegan Riley, vice president and general manager of North American storage at HP Enterprise. Welcome to theCUBE. >> Thank you, thanks for having me. >> Thanks for coming on, love the pin, as always wearin' that with flair. Love the logo, always comment on that when I, first I was skeptical on it, but now I love it, but, HP doing great in storage with acquisitions of SimpliVity and Nimble where you had a good run there. >> Keegan: Absolutely. >> We just had a former HPE entrepreneur on doing a storage startup, so we're familiar with the HPE storage. Good story. What's the update now, you got Discover in the books, now you got the Madrid event coming up. Software-defined storage, that pony's going to run for a while. What's the update? >> Yeah, so appreciate the time, appreciate you having me on. You know, the way that we're thinking about HPE's storage, it's interesting, the company is so different, and I mentioned to you guys when we were talking before that I actually left HP to come to Nimble, so in some ways I'm approaching the gold pin for a 10 year anniversary at HP. But the-- >> And they retro that so you get that grandfathered in. >> Oh, absolutely, absolutely, vacation time carries over, it's beautiful. But the HPE storage that I'm now leading is in some ways very different from the HP storage that I left six years ago, and the vision behind HPE's storage is well aligned with the overall vision of Hewlett-Packard Enterprise, which is we make hybrid IT simple, we power the intelligent edge, and we deliver the services to empower organizations to do this.
And the things that we were thinking about at Nimble and the things that we're thinking about as kind of a part of HPE are well aligned with this. So, our belief is everyone at this conference cares about whether it's software defined, whether it's hyperconverged, whether it's all flash, so on and so forth, but in the real world what clients tend to care about is kind of their experience, and we've seen this really fundamental shift in how consumers think about interacting with IT in general. The example I always give is, you know, I've been in sales my whole career, I've traveled a lot, and historically, 15 years ago, when I would go to a new city, you know, I would land and I would jump on an airport shuttle to go rent a car, and then I would pull out a Thomas Guide and I would go to cell C3 and map out my route to the client and things like that. And so I just expected that if I had a meeting at 2:00 p.m., I needed to land at 10:00 a.m. to make my way there, that was just my experience. Cut to today, you know, I land and I immediately pull out my iPhone and hail an Uber and, you know, reserve an Airbnb when I get there, and for a 2:00 p.m. meeting I can land at 1:15 and I know Waze is going to route me around traffic to get there. So, my experience as a consumer has fundamentally changed, and that's true of IT organizations and consumers within those organizations. So, IT departments have to adapt to that, right? And so powering this hybrid IT experience and servicing clients that expect immediacy is what we're all about. >> Okay, so I love that analogy. In fact when we were at HP Discover we kind of had this conversation, so as you hailed that Uber, IT wants self driving storage. >> Keegan: Absolutely. >> So, bring that, tie that back, things that we talk a lot about in kind of a colorful, joking way, but that is the goal: the automation of storage, so it's just available.
We talk about edge, unstructured data, moving compute to the edge; it's nuanced now, storage and compute all go through software. Self-driving storage means something, and it's kind of a joke on one hand, but what does it actually mean for an IT guy?
>> No, that's a great question, and this is exactly the way that we think about it. And the self-driving car analogy is a really powerful one, right? And so the way we think about this, we're delivering a predictive cloud platform overall, and notice that's not a predictive cloud storage conversation, and it's a big part of why it made a ton of sense for Nimble Storage to become a part of HPE. We brought to bear a product called InfoSight that you might be familiar with. The idea behind InfoSight is that in a cloud-connected world, the client should never know more about what's going on in their infrastructure than we do. So, we view every system as being at the edge of our network, and for about seven years now we've been collecting a massive amount of information about infrastructure, about 70 million telemetry points per day per system coming back to us. So, we have a massive anonymized dataset about infrastructure. So, we've been collecting all of the sensor data in the same way that, say, Uber or Tesla has been collecting sensor data from cars, right, and the next step, the next wave of innovation, if you will, is: okay, it's great that you've collected this sensor data, now what do you do with it? Right? And so we're starting to think about how do you put actuators in place so that you can have an actual self-managing data center. How can you apply machine learning and global correlation in a way that actually applies artificial intelligence to the data center and makes it truly touchless and self-managing and self-healing, and so on and so forth?
>> So, that vision alone is when, well, I'm sure when you pitched that to Meg, she was like, "Okay, that sounds good, let's buy the company."
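The telemetry-driven approach Keegan describes, collecting sensor data from every system and mining it for problems before the client notices, can be illustrated with a toy sketch. To be clear, this is not InfoSight's actual pipeline; the rolling z-score method, the function name, and the sample data below are all illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag telemetry points that deviate more than `threshold`
    standard deviations from the trailing window's mean.

    A toy stand-in for the fleet-wide anomaly detection described
    in the interview; not HPE InfoSight's real algorithm.
    Returns a list of (index, value) pairs for flagged samples."""
    history = deque(maxlen=window)   # trailing window of recent readings
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:   # only score once the window is full
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: steady latency readings with one spike at index 40.
latencies = [1.0, 1.1, 0.9, 1.0] * 10 + [9.5] + [1.0] * 5
print(detect_anomalies(latencies))  # -> [(40, 9.5)]
```

In a real deployment the interesting part is the next step the interview points at: feeding flags like these into "actuators" that remediate automatically, rather than just alerting a human.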
But as well, there was another factor, which was the success that Nimble was having. A major shift in the storage market, and you can see it walking around here, is that over the last five, seven years there's been a shift from the storage specialist, the expert at managing LUNs and deploying and tuning, to the sort of generalist, because people realize, look, there's no competitive advantage. So, talk about that and how the person you've sold to has changed over your career.
>> Yeah, no, absolutely, it's a great point. And I think in a lot of ways it goes to, you're right, obviously Meg and Antonio saw a lot of value in Nimble Storage. The value that we saw as Nimble Storage is, as a standalone storage company with kind of one product to sell, you know, there's a saying in sales that if you're a hammer, everything looks like a nail, right. And so, it's really cool that we could go get on a whiteboard and explain why the CASL file system is revolutionary and delivers superior IOPS and so on and so forth, but the conversation is shifting to more of a solutions conversation. It moves to how do I deliver actual value, and how do I help organizations drive revenue and help them distinguish themselves from their competitors leveraging digital transformation. So, being a part of a company that has a wide portfolio and applying a solutions sales approach, it's game changing, right. Our ability to go in and say, "I don't want to tell you about the Nimble OS, I want to hear from you what your challenges are, and then I'm going to come back to you with a proposal to help you solve those challenges." It's exciting for our sales teams, frankly, because it changes our conversations and makes us more consultative.
>> Alright, talk about some of the--
>> Value conversations.
>> Talk about the sales engagement dynamic with the buyer of storage, especially; you mentioned the old days, now the new days.
A new dynamic's emerging that we've identified on theCUBE the past couple days, and I'll just kind of lay it out for you, and I want you to get a reaction. I'm the storage buyer of old; now I'm the modern guy. I've got to know all the ins and outs of speeds and feeds against all the competitors, but now there's a new overlay on top of that, which is a broader picture across the organization that has compute, that has edge. So I feel more, not diluted from storage, but more holistic around other things, so I have to balance both worlds. I've got to know and nail the storage equation.
>> Yeah.
>> Okay, as well as know the connection points with how it all works, kind of almost as an OS. How do you engage in that conversation? 'Cause it's hard, right? 'Cause with storage you go right into the weeds: speeds and feeds under the hood, see our numbers, we're great, we do all this stuff. But now you've got to say, wait a minute, in a VM environment it's this, in a cloud it's like this, and there's a little bit of a bigger picture, HCI or whatever that is. How do you deal with that?
>> No, absolutely, and I think that's well said. I mean, I think the storage market historically has always been sort of, alright, do you want Granny Smith apples or Red Delicious apples? It always sort of looked the same, and it was just about I can deliver x number of IOPS, and it became a speeds and feeds conversation. Today, it's not just not apples to apples; it's like, do you prefer apples, pineapples, or vacuum cleaners? There are so many different ways to solve these challenges, and so you have to take the conversation to a higher level, right. It has to be a conversation about how do you deliver value to businesses. And I think, I hear--
>> It gets confusing to the buyers, too, because they're being bombarded with a lot of FUD, and they still got to check the boxes on all the under-the-hood stuff; the engine's got to work.
>> And they come to VMWorld, and every year there's 92 new companies they haven't heard of before that are pitching them on, hey, I solve your problems. I think what I'm hearing from clients a lot is they don't necessarily want to think about the storage. They don't want to think about do I provision RAID 10 or RAID 5, and do I manage this aggregate in this way or that way; they don't want to think about it, right. So, I think this is why you're seeing the success of these next-generation platforms that are radically simple to implement, right. And in some ways at Nimble, when we were talking to some of these clients that have sort of a legacy approach to storage, where you've got like a primary LUN administrator (there's nothing wrong with that job, it's a great job and I have friends who do that job), a lot of companies are now shifting to more of a generalist: I manage applications and I manage, you know--
>> John: You manage a dashboard console.
>> Exactly, yeah, so you have to make it simple, and you have to make it so you don't have to think about those things anymore.
>> So, thinking about your relationship over the years with VMware: as HP, you were part of the cartel, I call it, the inner circle. You got all the APIs early, all the, you know, the CDKs or SDKs early. You were one of the few: you, of course EMC, NetApp, all the big storage players, IBM, a couple others. And then you go to Nimble, you're the little guy, and it's like, c'mon, hey, let's partner! So much has changed now that you're back at HPE; how has VMware evolved from an ecosystem partner standpoint, and specifically where are you today with HPE?
>> That's a great question, and I remember the early days at Nimble when, you know, we were knocking on the door and they were like, "Who are you again? Nimble who?"
And we're really proud of the reputation that we've earned inside of VMware. They're a great partner, and they've built such a massive ecosystem; I mean, this show is incredible, right. They're such a core part of our business. At Nimble I feel like we earned a seat at that table in some ways through technology differentiation and just grit and hustle, right. We kind of edged our way into those conversations.
>> Dave: Performance.
>> And performance. And we started to get interesting to them from a strategic perspective as just Nimble Storage. Now, as a part of HPE, in some ways it's like, "Oh, that was cute." We thought we were strategic to VMware; now we actually are very strategic to VMware in the things that we're doing with them. From an innovation perspective it's like throwing fuel on the fire, right. So, we're doubling down on some of the things we're doing around VMVision and InfoSight, our partnership with Visa, and on ProLiant servers, things like that. It's a great partnership. And I think the things that VMware's announced this week are really exciting.
>> Thank you, great to see you, and great to have you on theCUBE.
>> Thank you so much.
>> I'll give you the last word. What's coming up for you guys and HP storage? As the vice president and general manager, you're out there pounding the pavement; what should customers look for from you guys?
>> No, I appreciate that. There's a couple things. So, first and foremost, our R&D budget just got a lot bigger, specifically around InfoSight. So, you'll see InfoSight come to other HPE products, 3PAR, ProLiant servers, and so on and so forth, and InfoSight will become a much more interesting cloud-based management tool for proactive wellness in the infrastructure. Second, you'll see us double down on our channel, right. Nimble was always 100% channel, SimpliVity was 100% channel, and HPE Storage is going to get very serious about embracing the channel.
And third, we're going to ensure that the client experience remains top notch. The NPS score of 85 that Nimble delivered, we're really proud of that, and we're going to make sure we don't mess that up for our clients.
>> You know, it's so funny, just an observation, but I worked at HP for nine years in the late '80s and early '90s, and I've been covering it with theCUBE for over seven years now. Storage is always like the power engine of HPE; no matter what's happening, it comes back down to storage, I mean, the earnings, the results, the client engagements. Storage has moved from this corner kind of function to really strategic. And it continues that way. Congratulations.
>> Thank you so much. Appreciate the time.
>> Alright, it's theCUBE. Coming up, Pat Gelsinger on theCUBE at one o'clock. Stay with us. We've got all the great guests and alumni and also executives from VMware coming on theCUBE. I'm John Furrier with Dave Vellante. We'll be right back with more live coverage after this short break.

Published Date: Aug 30, 2017
