Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying by the strictest definition. And he did that for a reason, because he's smart enough to know that the chip industry is masterful at workarounds. Here's proof that the death of Moore's Law by its strictest definition is largely irrelevant. My colleague David Floyer and I were hard at work this week and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating, and quite dramatically.
This graphic digs into the progression of Apple's SoC, system on chip, developments from the A9 and culminating with the A14, the five-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time for three processor types. The CPU, which we measure here in terahertz, that's the blue line which you can hardly even see; the GPU, which is the orange, that's measured in trillions of floating point operations per second; and then the NPU, the neural processing unit, and that's measured in trillions of operations per second, which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC, because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. Now that equates to about 40% annually. CPU performance has now moderated. That growth is now down to roughly 30% annual improvements. So technically speaking, Moore's Law as we knew it is dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's even higher than that, because this figure covers only the three processor types; we're not counting the impact of DSPs and accelerator components of Apple's system on a chip, which would push this even higher. Apple's A14, which is shown on the right-hand side here, is quite amazing. It's got a 64-bit architecture, it's got many, many cores. It's got a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, it enables the types of AI that we show here, which continue to evolve: facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack?
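The arithmetic behind those growth figures can be made concrete. As a quick illustrative calculation (the 24-month doubling and the 118% annual pace are the figures cited above; nothing else here comes from the chart):

```python
# Back-of-envelope growth-rate arithmetic for the figures discussed above.

def annual_rate_from_doubling(months: float) -> float:
    """Annual growth rate implied by a doubling every `months` months."""
    return 2 ** (12 / months) - 1

def compound(annual_rate: float, years: float) -> float:
    """Total multiple after compounding `annual_rate` for `years` years."""
    return (1 + annual_rate) ** years

# Classic Moore's Law: doubling every 24 months works out to ~41% per year,
# close to the "about 40% annually" quoted above.
print(f"{annual_rate_from_doubling(24):.0%} per year")  # 41% per year

# The combined SoC pace of 118% annually, compounded over the five years
# since 2015, is roughly a 49x improvement.
print(f"{compound(1.18, 5):.0f}x over five years")  # 49x over five years
```

The point of compounding the numbers is that a seemingly modest difference in annual rate (41% versus 118%) produces an enormous gap in just five years.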
Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that's quite telling. We just shared that processing power is exploding. The costs consequently are dropping like a rock. Apple's A14 costs the company approximately 50 bucks per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption. They said this chip will cost a buck, a dollar, to shave 10% off your refrigerator's electricity bill. It's just astounding. But look at where the expensive bottlenecks are: it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now with custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software, into hardware, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99%, is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models, and we're going to come back to that. But this picture says if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding. Cisco is already designing its own chips. But Dell, HPE, which maybe used to do a lot of its own custom silicon, Pure Storage, NetApp, I mean, the list goes on and on: either you're going to start designing custom silicon or you're going to get disrupted, in our view.
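The pattern described here, where most data stays at the edge and only interesting snapshots go back to the cloud, can be sketched in a few lines. This is a hypothetical illustration of the idea, not anyone's actual pipeline; the reading fields and window size are invented:

```python
from collections import deque

class EdgeBuffer:
    """Tiny sketch of edge-first data handling: keep only a short rolling
    window of readings locally, and queue a snapshot for cloud upload only
    when a reading is flagged. Everything else is simply dropped."""

    def __init__(self, window: int):
        self.recent = deque(maxlen=window)  # stands in for ~5 minutes of data
        self.to_upload = []                 # snapshots bound for the cloud

    def ingest(self, reading: dict):
        self.recent.append(reading)
        if reading.get("anomaly"):
            # Persist the window surrounding the interesting event.
            self.to_upload.append(list(self.recent))

buf = EdgeBuffer(window=3)
for r in [{"speed": 60}, {"speed": 61}, {"speed": 20, "anomaly": True}, {"speed": 25}]:
    buf.ingest(r)

print(len(buf.recent))     # 3, only the rolling window survives
print(len(buf.to_upload))  # 1, one snapshot queued for the cloud
```

The design choice mirrors the Tesla anecdote above: persistence is the exception, not the rule, so the network and storage bottlenecks are only paid for data worth training on.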
AWS, Google and Microsoft are all doing it for a reason, as is IBM, and as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bring to bear this processing power that we're talking about to create new capabilities like we've never seen before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly, in his book "Seeing Digital," author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence and says that there's nothing artificial about machine intelligence, just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting nonetheless; words matter. We hear a lot about machine learning and deep learning, and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter", make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now deep learning is a more advanced type of machine learning. It uses more complex math. But the point that we want to make here is that today much of the activity in AI is around building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model that we were just talking about: taking real-time data from sensors, processing that data locally, applying the training that was developed in the cloud, and making micro adjustments in real time. So let's take an example. Again, we love Tesla examples.
Think about an algorithm that optimizes the performance and safety of a car on a turn: the model takes data on friction, road condition, angles of the tires, the tire wear, the tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car and gets data from sensors and makes these micro adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time, because there's so much of it. It just can't push it all back to the cloud. But it can, however, selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data, because they notice that there are a lot of accidents in New England in certain months. And maybe Tesla takes that snapshot and sends it back to the cloud and combines it with other data, maybe from other parts of the country or other regions of New England, and it perfects that model further to improve safety. This is just one example of thousands and thousands that are going to further develop in the coming decade. I want to talk about how we see this evolving over time. Inference is where we think the value is. That's where the rubber meets the road, so to speak, based on the previous example. Now this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these applications will mature over time. As inference becomes more and more mainstream, the opportunities for AI inference at the edge and in IoT are enormous. And we think that over time, 95% of that spending is going to go to inference, where it's probably only 5% today.
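A deployed inference engine of the kind just described, cloud-trained parameters applied locally to live sensor data, can be caricatured in a few lines. The weights and sensor names below are invented for illustration; real systems use far richer models, but the shape of the loop is the point:

```python
# Hypothetical inference step: coefficients "trained in the cloud" applied
# locally to live sensor readings. The weights and sensor names are invented.

WEIGHTS = {"friction": -0.8, "tire_wear": -0.3, "tire_pressure": 0.1}

def steering_adjustment(sensors: dict) -> float:
    """Micro-adjustment computed locally, in real time, from one reading."""
    return sum(WEIGHTS[name] * sensors[name] for name in WEIGHTS)

reading = {"friction": 0.5, "tire_wear": 0.2, "tire_pressure": 1.0}
print(round(steering_adjustment(reading), 2))  # -0.36
```

Training in the cloud produces the `WEIGHTS` artifact; inference at the edge is just cheap arithmetic over live readings, which is why it can run on an in-car chip with no round trip to the cloud.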
Now today's modeling workloads are pretty prevalent in things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better over time. Now in the middle here, we show the industries which are all going to be transformed by these trends. Now, one of the points that Moschella made in his book is that, historically, vertical industries are pretty stovepiped. They have their own stack, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries, and they're largely insulated from disruption from other industries, maybe unless they were part of a supply chain. But today, you see all kinds of cross-industry activity. Amazon entering grocery, entering media. Apple in finance and potentially getting into EVs. Tesla eyeing insurance. There are many, many examples of tech giants who are crossing traditional industry boundaries. And the reason is because of data. They have the data. And they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain, and they're continuing to improve. Blockchain today doesn't have great performance; it's very overhead intensive with all that encryption. But as these platforms take advantage of this new processing power and better software and AI, they could very well disrupt traditional payment systems. And again, there are so many examples here. But what I want to do now is dig into enterprise AI a bit. And just a quick reminder, we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is net score. That's a measure of spending momentum. The horizontal axis is market share, or pervasiveness in the dataset. The red line at 40% is like a subjective anchor that we use.
Anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; frankly, it's an adjacency to AI, you could even argue. And it's the cloud where all the ML action is taking place today. But that will change, we think, as we just described, because data's going to get pushed to the edge. And this chart shows you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph: spending velocity on the vertical axis by market share on the horizontal axis. Microsoft, AWS, Google, of course, the big cloud guys, they dominate AI and machine learning. Facebook's not on here. Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, they have the scale, and as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here. They're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left. They have an extremely high net score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI. They're super high on the y-axis. Dataiku, they help create machine learning based apps. C3.ai, you're hearing a lot more about them. Tom Siebel's involved in that company. It's an enterprise AI firm; you hear a lot of their ads now about doing AI in a responsible way, really the kind of enterprise AI that has sort of always been IBM Watson's calling card. There's SAP with Leonardo. Salesforce with Einstein. Again, IBM Watson is right there just at the 40% line. You see Oracle is there as well.
They're embedding automated, or machine, intelligence with their self-driving database, as they call it; that sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it. You don't have to be an expert at AI. The hard part is going to be how and where to apply AI. And the simplest answer there is follow the data. There's so much more to the story, but we just have to leave it there for now, and I want to summarize. We have been pounding the table that the post-x86 era is here. It's a function of volume. Arm wafer volumes are 10X those of x86. Pat Gelsinger understands this. That's why he made that big announcement. He's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors can't be overstated. And today, we've quantified something that we really haven't seen before. And that's that the actual performance improvements that we're seeing in processing today are far outstripping anything we've seen before. Forget Moore's Law being dead; that's irrelevant. The original finding is being blown away this decade, and who knows, with quantum computing, what the future holds. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from the consumer use cases first. Apple continues to lead the way. And Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing really deep integration.
You see this with Oracle, which is kind of a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Pat Gelsinger may face massive challenges, as we laid out a couple of posts ago in our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing and there's no end in sight. We don't think we're going to see the ebbs and flows we've seen in the past, those boom and bust cycles for semiconductors. We just think that prices are coming down, the market's elastic and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI; rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI and you're going to be applying it. Now data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago and he seems right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio, in storage and networking, and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible.
And you're going to be writing new applications to take advantage of this and use AI to change the world, literally. You'll have to figure out how to get access to the most relevant data. You'll have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah well, that's a whole other topic. I think for now, we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and we should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay. That's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind this. He's done some great work there. Remember these episodes are all available as podcasts wherever you listen; just search "Breaking Analysis podcast" and please subscribe to the series. We'd appreciate that. Check out ETR's website at ETR.plus. We also publish a full report with more detail every week on Wikibon.com and siliconangle.com, so check that out. You can get in touch with me. I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well. And we'll see you next time. (bright music)
Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1
(upbeat music) >> Hello everyone. Welcome to theCUBE's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning: top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited to be joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much. >> I've been following your company since the founding, pre-pandemic, and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI. Finally, AI has broken out through the ropes and gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just about every industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like, to actually succeed with AI, companies like OpenAI or Google, you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud.
And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about: it was just code, the infrastructure made programmable. That's super important. That's what AI people wanted, to first program AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT. Before you get into the relationship that you have with them, can you explain why there's such hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important?
>> Yeah, so foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundation models and repurpose them or fine-tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine-tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all three of these workloads. Companies like OpenAI or Cohere train large language models, and open source versions like GPT-J are built, on top of Ray. There are many startups and other businesses that don't want to train the large underlying foundation models, but that do want to fine-tune them, do want to adapt them to their purposes, and build products around them and serve them; those are also using Ray and Anyscale for that fine-tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires huge scale. It requires a lot of data. It requires a lot of compute: GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems.
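The three workloads Robert lists, train, fine-tune, and serve, form a simple pipeline: each stage's artifact feeds the next. Purely as a schematic (the toy word-count "model" here is a stand-in, not how foundation models actually work):

```python
# Schematic of the three workloads: each stage's artifact feeds the next.
# The "model" here is just a word-frequency dict, a deliberate stand-in.

def pretrain(corpus: list) -> dict:
    """Stage 1, training: build the base artifact from a large corpus."""
    model = {}
    for doc in corpus:
        for word in doc.split():
            model[word] = model.get(word, 0) + 1
    return model

def fine_tune(model: dict, domain_docs: list) -> dict:
    """Stage 2, fine-tuning: adapt the pretrained artifact to a domain."""
    tuned = dict(model)
    for doc in domain_docs:
        for word in doc.split():
            tuned[word] = tuned.get(word, 0) + 5  # upweight domain terms
    return tuned

def serve(model: dict, query: str) -> str:
    """Stage 3, serving: answer queries with the deployed artifact."""
    candidates = [w for w in query.split() if w in model]
    return max(candidates, key=lambda w: model[w]) if candidates else ""

base = pretrain(["the cat sat", "the dog sat"])
tuned = fine_tune(base, ["dog dog"])
print(serve(tuned, "cat or dog"))  # dog
```

Note how fine-tuning is far cheaper than pretraining: it starts from the existing artifact rather than from scratch, which is exactly the economic point made above.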
Or you can build and manage the infrastructure yourself, which you can do, but it's going to slow your team down. You know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you've got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point: doing it yourself is hard to do. These are things where opportunities are, and the Cloud did that with data centers. It took the data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build with their creativity. Is that kind of the big deal? Is that kind of a big deal happening, that you guys are taking the learnings and making them available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic: what you want the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray and Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will; you have to make it easier to do.
>> And just for clarification, do you mind explaining the relationship of Ray and Anyscale real quick, just before we get into the presentation? >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray in order to provide a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray; basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure, and provide more performance and superior infrastructure. >> Awesome. I know you've got a presentation on Ray and Anyscale, and you guys are positioning yourselves as the infrastructure for foundational models. So I'll let you take it away, and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend, and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the compute needs of machine learning applications and what you can do with a single chip, right.
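That gap compounds quickly. A rough calculation using the two rates quoted above (35x per 18 months for ML compute demand, against an idealized 2x per 18 months for chip performance):

```python
# Compare two exponential growth rates over the same horizon.

def growth(multiple_per_period: float, period_months: float, months: float) -> float:
    """Total multiple after `months`, growing `multiple_per_period` per period."""
    return multiple_per_period ** (months / period_months)

five_years = 60  # months
demand = growth(35, 18, five_years)  # ML compute demand, 35x per 18 months
chips = growth(2, 18, five_years)    # idealized Moore's Law, 2x per 18 months
print(f"demand: {demand:,.0f}x, chips: {chips:.0f}x, gap: {demand / chips:,.0f}x")
```

Over five years, demand grows by roughly 140,000x while single-chip performance grows by roughly 10x, a gap of about four orders of magnitude, which is why scaling out across many machines is a requirement rather than a nice-to-have.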
So even if Moore's Law were continuing strong and, you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with a chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice-to-have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. Companies like OpenAI, which use Ray to train their large models like ChatGPT; companies like Uber, which run all of their deep learning and classical machine learning on top of Ray; companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale, processing petabytes of data every single day. And so the project has seen just enormous adoption over the past few years. And one of the most exciting use cases is really providing the infrastructure for building, training, fine-tuning, and serving foundation models. So here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. The workloads required there are things like supervised pre-training and also reinforcement learning from human feedback.
So this is not only regular supervised learning, but actually more complex reinforcement learning workloads that take human input about which response to a particular question is better than another response, and incorporate that into the learning. There are open source versions as well, like GPT-J, also built on top of Ray, as well as projects like Alpa coming out of UC Berkeley. So these are some examples of exciting projects and organizations training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low-level primitives for building scalable Python applications: things like taking a Python function or a Python class and executing them in a cluster setting. So Ray Core is extremely flexible, and you can build arbitrary scalable applications on top of it. On top of the core system, what really gives Ray a lot of its power is its ecosystem of scalable libraries: libraries for ingesting and pre-processing data, for training your models, for fine-tuning those models, for hyperparameter tuning, for batch processing and batch inference, for model serving and deployment. And a lot of Ray users like Ray because they want to run multiple workloads. They want to train and serve their models. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So that's a little overview of the different components of Ray. So why do people choose Ray? I think there are three main reasons. The first is the unified nature: the fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right.
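The core primitive Robert describes, taking an ordinary Python function and executing it somewhere else while you collect the result later, follows a futures-style pattern. Ray's actual API uses `@ray.remote` decorators and `ray.get()`; the sketch below uses only the standard library, with a thread pool standing in for remote cluster workers, so it is an analogy for the submit-then-gather rhythm rather than Ray's real interface, and the function names are illustrative:

```python
# A futures-style task pattern, sketched with the standard library.
# Ray generalizes this idea across a whole cluster; here a thread
# pool stands in for remote workers. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def run_tasks(fn, inputs):
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Submitting returns futures immediately; results are
        # gathered afterward, the same shape as Ray's task API.
        futures = [pool.submit(fn, x) for x in inputs]
        return [f.result() for f in futures]

print(run_tasks(square, range(5)))  # [0, 1, 4, 9, 16]
```

The design point is that the calling code never changes when the pool of workers grows, which is what lets the scalable libraries Robert lists sit on top of one common substrate.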
This also includes the fact that it's future-proof. AI is incredibly fast moving, and many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future-proof, flexible, and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the whole point of Ray: making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale training, data ingest, pre-processing, and so on. So scalability and performance are critical for doing machine learning, and that is something Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere: on any Cloud provider, Google Cloud, AWS, Azure; on your Kubernetes cluster; on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads, and it integrates with libraries like TensorFlow, PyTorch, JAX, XGBoost, Hugging Face, PyTorch Lightning, scikit-learn, or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right.
Or different data platforms like Databricks, Delta Lake, or Snowflake, or tools for model monitoring, or feature stores: all of these integrate with Ray. Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform built on top that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud, and what Anyscale offers is the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating time to market. You get that by having the managed service, so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides optimized developer workflows: things like easily moving from development to production, things like having the observability tooling and the debuggability to easily diagnose what's going wrong in a distributed application, things like the dashboards and the other kinds of tooling for collaboration, for monitoring, and so on. So that's the first bucket: developer productivity, moving faster, faster experimentation and iteration. The second reason people choose Anyscale is superior infrastructure. This is things like cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling, things like overall better performance and faster scheduling. And so these are the kinds of things Anyscale provides on top of Ray. It's the managed infrastructure, it's the developer productivity and velocity, as well as performance. So this is what I wanted to share about Ray and Anyscale. >> John: Awesome. >> Provide that context.
But John, I'm curious what you think. >> I love it. So first of all, it's a platform, because that's the platform architecture right there. So just to clarify, this is the Anyscale platform, not- >> That's right. >> Tools. So you've got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times. Is that because of PyTorch and TensorFlow, or because Python's the most friendly with machine learning, or because it's very common amongst all developers? >> That's a great question. Python is the language people are using to do machine learning, so it's the natural starting point. Now, of course, Ray is actually designed in a language-agnostic way, and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there's always that potential. >> So if you're slinging Python code out there and you're watching this video, get on the Anyscale bus quickly. Also, while you were giving the presentation, I couldn't help myself. Since you mentioned OpenAI, which by the way, congratulations, because they've had great scale. I've noticed their rapid growth; they were the fastest company to reach that number of users in the history of the computer industry. So a major success, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale, and it came up with scalability, flexibility, and ease of use. Obviously, scalability is right there in your name. >> That's pretty good. >> So that's what it came up with. So it nailed it. Did you have some inside prompt training in there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one.
>> But that's the kind of thing that came up really, really quickly. If I asked it to write a sales document, it probably would. But this is the future interface. This is why people are getting excited about the foundational models and the large language models: because it allows the interface with the user, the consumer, to be more human, more natural. And this clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just a chat bot that you talk to. This is going to be how you get things done: how you use your web browser, or how you use Photoshop, or how you use other products. You're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, if it doesn't understand, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things we're going to look back on, Robert, and say, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early were really in position when, say, the pandemic came. So getting in early is a good thing, and that's what everyone's talking about: getting in early and playing around, maybe replatforming, or even picking one or a few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those Moore's Law versus industry-trend numbers. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs?
What does the landscape look like from an operational standpoint, from the customer's side? Are they locked in, and since the benefit was flexibility, are you flexible enough to handle any Cloud? Basically, that's my question: what's the customer looking at? >> Cost is super important here, and companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI. And I think a lot of the advantage of Anyscale, what we can provide here, is not only better performance but cost efficiency. Because if we can run something faster and more efficiently, it uses fewer resources and you can lower your Cloud spending, right. We've seen companies go from 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, even 100% GPU utilization. That's something like a 5X improvement right there. So depending on the kind of application you're running, it's a significant cost savings. We've seen companies processing petabytes of data every single day with Ray get order-of-magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending potentially $100 million a year, getting a 10X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if you're a prospect for this and thinking about going in here, just like the Cloud, you've got infrastructure, you've got the platform, you've got SaaS. The same kind of thing's going to go on in AI. So I want to get into that ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you've got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run-through here.
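Before the demo, the utilization arithmetic Robert just quoted is worth a quick sanity check. The 20% and 95% utilization figures are his; the annual spend below is an arbitrary, illustrative number, not anything quoted in the conversation:

```python
# Back-of-the-envelope on GPU utilization and cost, using the 20%
# and 95% figures quoted above. The spend figure is illustrative.

def effective_cost(spend, utilization):
    """Cost per unit of useful GPU work at a given utilization."""
    return spend / utilization

spend = 1_000_000  # illustrative annual GPU spend, not a real figure

before = effective_cost(spend, 0.20)  # 20% utilization
after = effective_cost(spend, 0.95)   # 95% utilization

improvement = before / after
print(f"{improvement:.2f}x more useful work per dollar")  # 4.75x
```

The 4.75x result matches Robert's "something like a five x improvement": the ratio depends only on the two utilization figures, not on the absolute spend.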
So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. Workspaces are the Anyscale concept for interactive development, right. So here, imagine you want a familiar experience, like you're developing on your laptop. And here I have a terminal. It's not on my laptop; it's actually in the cloud, running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, OPT, and it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores and a bunch of memory. And as that's running, by the way, if I wanted to run this on 64 or 128 GPUs instead of 32, that's just a one-line change when I launch the Workspace. And what I can do is pull up VS Code, right. Remember, this is the interactive development experience. I can look at the actual code. Here it's using Ray Train to train the Torch model. We've got the training loop, and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger (this is using DeepSpeed), I could increase the number of GPUs each worker gets access to, and how that's distributed across the cluster. And if I wanted to run on CPUs instead of GPUs, or a different accelerator type, again, that's just a one-line change. And here we're using Ray Train to train the models, taking my vanilla PyTorch model using Hugging Face and scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at the CPU utilization here, where I think we're currently loading the model and running the actual application to start the training. And one of the things that's really convenient about Anyscale is that I can get that interactive development experience with VS Code.
You know, I can look at the dashboards. I can monitor what's going on. I have a terminal; it feels like my laptop, but it's actually running on a large cluster, with however many GPUs or other resources I want. And so it's really trying to combine the best of the familiar experience of programming on your laptop with the benefit of being able to take advantage of all the resources in the Cloud to scale. And when you're talking about cost efficiency: one of the biggest reasons people waste money, one of the silly reasons, is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto-terminate if they're idle. But imagine you go to sleep and you have this big cluster. You can shut off the cluster, come back tomorrow, restart the Workspace, and your big cluster is back up and all of your code changes are still there, all of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, a couple of things there. That single line of code change, that's game changing. And then the cost thing. I mean, human error is a big deal. People pass out at their computer. They've been coding all night, or they just forget about it. It's just like leaving the lights on or your water running in your house. At the scale that it is, the numbers will add up. That's a huge deal. Back in the old days, idle compute was just compute sitting there; now it's expensive clusters sitting idle while the data should be cranking the models. That's a big point.
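The "one line change" from the demo is worth pausing on: the scale of the job lives in configuration, separate from the training logic. In Ray Train this is roughly a `ScalingConfig(num_workers=..., use_gpu=...)` object, but the sketch below is not Ray's actual API; it is an abstract, illustrative version with made-up names, just to show the shape of the idea:

```python
# Scale as configuration, not code: the training logic never changes,
# only one parameter does. Illustrative sketch, not Ray Train's API.

def make_scaling_config(num_workers, use_gpu=True, gpus_per_worker=1,
                        cpus_per_worker=4):
    """Build a hypothetical scaling config like the demo describes:
    each worker gets one GPU and four CPU cores by default."""
    return {
        "num_workers": num_workers,
        "use_gpu": use_gpu,
        "resources_per_worker": {
            "GPU": gpus_per_worker if use_gpu else 0,
            "CPU": cpus_per_worker,
        },
    }

small = make_scaling_config(num_workers=32)   # the demo's 32 GPUs
large = make_scaling_config(num_workers=128)  # the one-line change

# Per-worker shape is untouched; only the worker count moved.
assert small["resources_per_worker"] == large["resources_per_worker"]
print(large["num_workers"] * large["resources_per_worker"]["GPU"])  # 128
```

Keeping the per-worker resource shape fixed while varying only the worker count is what makes "32 to 128 GPUs" a one-line edit instead of a rewrite.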
>> Another thing I want to add about cost efficiency is that if you're running on Anyscale, we make it really easy to use spot instances, these preemptible instances that can be significantly cheaper than on-demand instances. And so we see our customers go from not using spot instances, because they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box, use spot instances, and save a bunch of money. >> You know, this was my whole feature article at re:Invent last year when I met with Adam Selipsky: this next-gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. It's almost what DevOps did for Cloud. And what you were showing me in that demo had this whole SRE vibe. Remember, Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale, a similar kind of order of magnitude. I might be a little off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure, where developers only think about their application logic, and where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you've got a couple of websites. Ray's got its own website. You've got Anyscale.
You've got an event coming up. Give a plug for the company; I know you're looking to hire. >> Yeah, absolutely. Thank you. So first of all, we think AI is really going to transform every industry, and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI and get value out of AI. Now, if you're interested in learning more about Ray: Ray has been emerging as the standard way to build scalable applications, and adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models, but it's really across the board: companies like Netflix and Cruise and Instacart and Lyft and Uber, just among tech companies. It's across every industry: gaming companies, agriculture, farming, robotics, drug discovery, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. It's going to highlight a lot of the most impressive use cases and stories across the industry. And if your business wants to use LLMs, wants to train these large language models, fine-tune them with your data, deploy them, serve them, and build applications and products around them, give us a call, talk to us. We can really take the infrastructure piece off the critical path and make that easy for you. So that's what I would say. And, like you mentioned, we're hiring across the board: engineering, product, go-to-market. It's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and are continuing to iterate on. You've got growth ahead of you, you've got a tailwind. I mean, the AI wave is here.
I think OpenAI and ChatGPT, a customer of yours, have really opened up mainstream visibility into this new generation of applications: the user interface, the role of data, large scale, and how to make that programmable. So we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series on hot startups. In this case, this episode is about the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Jas Tremblay, Broadcom
(upbeat music) >> For decades the technology industry marched to the cadence of Moore's Law. It was a familiar pattern. System OEMs would design in the next generation of Intel microprocessors every couple of years or so, maybe bump up the memory ranges periodically, and the supporting hardware would kind of go along for the ride, upgrading its performance and bandwidth. System designers might beef up the cache, maybe throw some more spinning-disk spindles at the equation to create a balanced environment. And this was pretty predictable and consistent in its pattern, and reasonably straightforward compared to today's challenges. This has all changed. The confluence of cloud, distributed global networks, the diversity of applications, AI, machine learning, and the massive growth of data outside of the data center requires new architectures to keep up. As we've reported, the traditional Moore's Law curve is flattening. And along with that we've seen new packages with alternative processors like GPUs, NPUs, accelerators and the like, and the rising importance of supporting hardware to offload tasks like storage and security. And it's created a massive challenge to connect all these components together, the storage, the memories and all of the enabling hardware, and to do so securely, at very low latency, at scale, and of course, cost effectively. This is the topic of today's segment: the shift from a world that is CPU-centric to one where the connectivity of the various hardware components is where much of the innovation is occurring. And to talk about that, there is no company that knows more about this topic than Broadcom. And with us today is Jas Tremblay, who is general manager of the data center solutions group at Broadcom. Jas, welcome to theCUBE. >> Hey Dave, thanks for having me, really appreciate it. >> Yeah, you bet. Now Broadcom is a company that a lot of people might not know about.
I mean, the vast majority of the internet traffic flows through Broadcom products. (chuckles) Like pretty much all of it. It's a company with trailing 12-month revenues of nearly 29 billion and a 240 billion market cap. Jas, what else should people know about Broadcom? >> Well, Dave, 99% of the internet traffic goes through Broadcom silicon or devices. And I think what people are not often aware of is how broad it is. It starts with the devices, phones and tablets, that use our Wi-Fi technology or RF filters. And then those connect to access points, either at home, at work, or public access points, using our Wi-Fi technology. And if you're working from home, you're using a residential or broadband gateway, and that uses Broadcom technology also. From there you go to access networks, core networks, and eventually you work your way into the data center, all connected by Broadcom. So really we're at the heart of enabling this connectivity ecosystem, and at the core of it, we're a technology company. We invest about 5 billion a year in R&D. And as you were saying, last year we achieved 27.5 billion of revenue. And our mission is really to connect the ecosystem to enable what you said: this transformation toward a data-centric world. >> So talk about your scope of responsibility. What's your role generally, and specifically with storage? >> So I've been with the company for 16 years, and I head up the data center solutions group, which includes three product franchises: PCIe fabrics, storage connectivity, and Broadcom ethernet NICs. So my charter, my team's charter, is really server connectivity inside the data center. >> And what specifically is Broadcom doing in storage, Jas? >> So it's been quite a journey. Over the past eight years we've made a series of acquisitions and built up a pretty impressive storage portfolio. This first started with LSI, and that's where I came from. The team here came from LSI, which had two product franchises around storage.
The first one was server connectivity: HBAs, RAID controllers, expanders for SSDs and HDDs. The second product group was actually chips that go inside the hard drives, so SoCs and preamps. So that was an acquisition that we made, and actually that's how I came into the Broadcom group, through LSI. The next acquisition we made was PLX, the industry's leader in PCIe fabrics. They'd been doing PCIe switches for about 15 years. We acquired the company and really saw an acceleration in the requirements for NVMe-attached and AI/ML fabrics, very specialized, low-latency fabrics. After that, we acquired a large system and software company, Brocade. And Dave, if you recall, Brocade is the market leader in Fibre Channel switching. This is where, if you're a financial or government institution, you want to build a mission-critical, ultra-secure, really best-in-class storage network. Following the Brocade acquisition, we acquired Emulex, which is now the number one provider of Fibre Channel adapters inside servers. And the last acquisition for this puzzle was actually Broadcom, where Avago acquired Broadcom and took on the Broadcom name. And there we acquired ethernet switching capabilities and ethernet adapters that go into storage servers or external storage systems. So with all this, it's been quite the journey to build up this portfolio. We're number one in each of these storage product categories, and we now have four divisions that are focused on storage connectivity. >> That's quite remarkable when you think about it. I mean, I know all these companies that you were talking about, and they were quality companies, but they were kind of bespoke, and you had the vision to connect the dots and now take responsibility for that integration. We're going to talk about what that means in terms of competitive advantage, but I wonder if we could zoom out, and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so important.
Like what are the trends that are driving that shift we talked about earlier, from a CPU-centric world to one that's connectivity-centric? >> I think at Broadcom, we recognize the importance of storage and storage connectivity. And if you look at data centers, whether they be private, public cloud, or hybrid data centers, they're getting inundated with data. If you look at the digital universe, it's growing at about 23% a year. So over the course of four to five years you're doubling the amount of new information, and that poses two key challenges for the infrastructure. The first one is you have to take all this data and, for a good chunk of it, store it, be able to access it, and protect it. The second challenge is you actually have to go and analyze and process this data, and doing that at scale is the key challenge; these data centers are getting a tsunami of data. And historically they've been CPU-centric architectures, which means the CPU is at the heart of the data center and a lot of the workloads are processed by software running on the CPU. We believe we're currently transforming the architecture from CPU-centric to connectivity-centric. And what we mean by connectivity-centric is that you architect your data center thinking about the connectivity first. And the goal of the connectivity is to use all the components inside the data center, the memory, the spinning media, the flash storage, the networking, the specialized accelerators, the FPGAs, all these elements, and use them for what they're best at to process all this data. And the goal, Dave, is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers. So it's really a shift from CPU-centric to bringing in more specialized components and architecting the connectivity inside the data center. We think that's a really important part.
>> So you have this need for connectivity at scale, you mentioned, and you're dealing with massive, massive amounts of data. I mean, we're going to look back at the last decade and say, oh, you've seen nothing compared to when we get to 2030. But at the same time you have to control costs. So what are the technical challenges to achieving that vision? >> So it's really challenging. It's not that complex to build a faster, bigger solution if you have no cost or power budget. And really the key challenges our team faces working with customers are, first, architectural challenges. We would all like to have one fabric that can connect all the devices and give us all the characteristics we need, but the reality is we can't do that. So you need distinct fabrics inside the data center, and you need them to work together. You'll need an ethernet backbone. In some cases you'll need a Fibre Channel network. In some cases you'll need a small fabric for thousands or hundreds of thousands of HDDs. You'll need PCIe fabrics for AI/ML servers. And one of the key architectural challenges is which fabric to use when, and how to develop these fabrics to meet their purpose-built needs. That's one thing. The second architectural challenge, Dave, is what I challenge my team with. For example, how do I double bandwidth while reducing net power? How do I take a storage controller and increase the IOPS by 10X while allocating only 50% more power budget? That equation requires tremendous innovation, and that's really what we focus on; power is becoming more and more important in that equation. So you've got decisions from an architecture perspective as to which fabric to use, and you've got this architectural challenge around innovating and doing things smarter and better to drive down power while delivering more performance.
Then if you take those things together, the problem statement becomes more complex. You've got these silicon devices with complex firmware on them that need to interoperate with multiple devices, and they're getting more and more complex. So there are execution challenges in what we need to do, and what we're investing to do is shift quality left, so that these complex devices come out to market on time and with high quality. And one of the key things, Dave, that we've invested in is emulation of the environment before you tape out your silicon. So effectively taking the application software, running it on an emulation environment, making sure that works, running your tests before you tape out, and that ensures quality silicon. So it's challenging, but the team loves challenges. That's what we're facing: on one hand architectural challenges, on the other hand a new level of execution challenges. >> So you're compressing the time to final tape out versus maybe traditional techniques. And then, you mentioned architecture. Am I right, Jas, that from an architectural standpoint, because your latency is so important, you're essentially trying to minimize the amount of data that you have to move around and actually bringing compute to the data? Is that the right way to think about it? >> Well, I think there are multiple parts of the problem. One of them is you need to do more data transactions, for example data protection with RAID algorithms. We need to do millions of transactions per second, and the only way to achieve this with minimal power impact is to hardware accelerate these. That's one piece of investment. The other investment is, you're absolutely right, Dave, it's shuffling the data around the data center. 
So in the data center, in some cases you need to have multiple pieces of the puzzle, multiple ingredients, processing the same data at the same time, and you need advanced methodologies to share the data and avoid moving it all over the data center. So that's another big piece of investment that we're focused on. >> So let's stay on that, because I see this as disruptive. You talk about spending $5 billion a year in R&D. Talk a little bit more about the disruptive technologies, or the supporting technologies, that you're introducing specifically to support this vision. >> So let's break it down into a couple of big industry problems that our team is focused on. The first one, I'll take an enterprise workload: database. If you want the fastest running database, you want to utilize local storage and NVMe-based drives, and you need to protect that data. RAID is the mechanism of choice to protect your data in local environments, and there what we need to do is really just do the transactions a lot faster. Historically, the storage has been a bit of a bottleneck in these types of applications. So for example, in our newest generation product, we're doubling the bandwidth and increasing IOPS by 4X, but more importantly we're accelerating RAID rebuilds by 50X. And that's important, Dave. If you're using a database, in some cases you limit the size of that database based on how fast you can do those rebuilds. So this 50X acceleration in rebuilds is something we're getting a lot of good feedback on from customers. The last metric we're really focused on is write latency: how fast can the CPU send the write to the storage connectivity subsystem and commit it to drives? We're improving that by 60X generation over generation, so we're talking fully loaded latency of 10 microseconds. So for an enterprise workload, it's about data protection, much, much faster, using NVMe drives. That's one big problem. 
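The RAID rebuild problem Jas describes can be illustrated with the simplest case: single-parity XOR reconstruction. This is a toy sketch, not Broadcom's hardware implementation; real controllers accelerate exactly this kind of math across enormous drive counts:

```python
# Illustrative only: RAID-5 style single-parity reconstruction via XOR.
def parity(blocks):
    """XOR a list of equal-length byte blocks into one parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # three "drives"
p = parity(data)                                # parity "drive"

# Lose drive 1; rebuild its contents from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Every byte of every surviving drive has to be read and XORed to rebuild one failed drive, which is why accelerating rebuilds 50X in hardware matters so much at scale.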
The other one is, if you look, Dave, at YouTube, Facebook, TikTok, the amount of user-generated content, specifically video content, that they're producing on an hour-by-hour basis is mind boggling. And the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drives to store and access all that data in a very reliable way. So there we're leading the industry in the transition to 24 gig SAS and multi-actuator drives. The third big problem is around AI/ML servers. These are some of the highest performance servers; they basically need super low latency connectivity between GPGPUs, networking, NVMe drives and CPUs, and to orchestrate all of that together. And the fabric of choice for that is the PCIe fabric. So here, we're talking about 115 nanosecond latency in a PCIe fabric, fully nonblocking, very reliable. And here we're helping the industry transition from PCIe gen four to PCIe gen five. And the last piece is, okay, I've got an AI/ML server, I have a storage system with hard drives, or a storage server in the enterprise space. All these devices and systems need to be connected to the ethernet backbone. And my team is heavily investing in ethernet NICs, transitioning to 100 gig, 200 gig, 400 gig, and putting in capabilities optimized for storage workloads. So those are the four big things that we're focused on at the industry level from a connectivity perspective, Dave. >> That makes a lot of sense and really resonates, particularly as we have that shift from CPU centric to connectivity centric. And the other thing you said, I mean, you're talking about 50X RAID rebuild times. One of the things you learn in storage is to ask the question, what happens when something goes wrong? Because it's all about recovery; you can't lose data. And the other thing you mentioned is write latency, which has always been the problem. Okay, reads, I can read out of cache, but ultimately you've got to get it to where it's persisted. 
So there are some real technical challenges there that you guys are dealing with. >> Absolutely, Dave. And these are the type of problems that get the engineers excited. Give them really tough technical problems to go solve. >> I wonder if we could take a couple of examples, or an example, of scaling with a large customer, for instance obviously hyperscalers, or take a company like Dell. I mean, they're a big company, a big customer. Take us through that. >> So we use the word scale a lot at Broadcom. We work with some of the industry leaders in data centers and OEMs, and scale means different things to them. For example, if I'm working with a hyperscaler that is getting inundated with data and needs half a million storage controllers to store all that data, well, their scale problem is, can you deliver? And Dave, you know how much of a hot topic that is these days. So they need a partner that can scale from a delivery perspective. But if I take a company like Dell, for example, that's very focused on storage, from storage servers to their acquisition of EMC, they have a very broad portfolio of data center storage offerings, and scale to them, from a connected-by-Broadcom perspective, means that you need to have the investment scale to meet their end-to-end requirements, all the way from a low end storage connectivity solution for booting a server, up to a very high end all-flash array or high density HDD system. So they want a partner that can invest, and has the scale to invest, to meet their end-to-end requirements. Second thing is, their different products are unique and have different requirements, and you need to adapt your collaboration model. For example, some products within the Dell portfolio might say, I just want a storage adapter, plug it in, the operating system will automatically recognize it. I need this turnkey. I want to do minimal investment; it's not an area of high differentiation for me. 
At the other end of the spectrum, they may have applications where they want deep integration with their management and our silicon tools so that they can deliver the highest quality and highest performance to their customers. So they need a partner that can scale from an R&D investment perspective, from a silicon, software and hardware perspective, but they also need a company that can scale from a support and business model perspective and give them the flexibility that their end customers need. So Dell is a great company to work with. We have a long-lasting relationship with them, and the relationship is very deep in some areas, for example server storage, and is also quite broad. They are adopters of the vast majority of our storage connectivity products. >> Well, I want to talk about the uniqueness of Broadcom. Again, I'm in awe of the fact that somebody had the vision, you guys, your team, obviously your CEO, one of the visionaries of the industry, had the sense to look out and say, okay, we can put these pieces together. So I would imagine a company like Dell is able to consolidate their supplier base and push you for integration and innovation. How unique is the Broadcom model? What's compelling to your customers about that model? >> So I think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest. If you look at some of the things we talked about from a scale perspective, how data centers throughout the world are getting inundated with data, Dave, they need help. And we need to equip them with cutting edge technology to increase performance, drive down power and improve reliability. So they need a partner that, in each of the product categories they partner with us on, can invest with scale. That's, I think, one of the first things. The second thing is, if you look at this connectivity centric data center, you need multiple types of fabric. 
And whether it be cloud customers or large OEMs, they are organizing themselves to be able to look at things holistically. They're no longer product companies; they're data center architecture companies. And so it's good for them to have a partner that can look across product groups and across divisions and say, okay, this is the innovation we need to bring to market, these are the problems we need to go solve, and they really appreciate that. And I think the last thing is a flexible business model. For example, within my division, we offer different business models, different engagement and collaboration models with technology. But there's another division that, if you want to innovate at the silicon level and build custom silicon, like many of the hyperscalers and other companies are doing, is focused on just that. So I feel Broadcom is unique from a storage perspective in its ability to innovate, the breadth of its portfolio and the flexibility in its collaboration models to help our customers solve their customers' problems. >> So you're saying you can deal with merchant products, slash open products, or you can do high customization. Where does software differentiation fit into this model? >> So it's actually one of the most important elements. I think a lot of our customers take it for granted that we'll take care of the silicon, we'll anticipate the requirements and deliver the performance that they need, but the software, firmware, drivers and utilities are where a lot of differentiation lies. In some cases we'll offer an SDK model where customers can build their entire applications on top of that. In some cases they want a complete turnkey solution, where you take the technology, integrate it into a server, the operating system recognizes it, and you have out-of-box drivers from Broadcom. So we need to offer them that flexibility, because their needs are quite broad there. >> So last question: what does the future of the business look like to Jas Tremblay? 
Give us your point of view on that. >> Well, it's fun. I've got to tell you, Dave, we're having a great time. I've got a great team, they're the world's experts on storage connectivity, and working with them is a pleasure. And we've got a great set of customers that are giving us cool problems to go solve, and we're excited about it. With the acceleration of all this digital transformation that we're seeing, we're excited, we're having fun, and I think there are a lot of problems to be solved. We also have a responsibility. The ecosystem and the industry are counting on our team to deliver the innovation from a storage connectivity perspective. And I'll tell you, Dave, we're having fun. It's great, but we take that responsibility pretty seriously. >> Jas, great stuff. I really appreciate you laying all that out. Very important role you guys are playing, and you have a really unique perspective. Thank you. >> Thank you, Dave. >> And thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time.
Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic with several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader, in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with Apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years, but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines to machines driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data through sensors and cameras. Other edge devices are going to drive enormous data volumes and processing power to boot. 
Every windmill, every factory device, every consumer device, every car will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space, compared to PCs and even the iPhone itself, is about to be dwarfed by an explosion of devices. Intel is not well positioned for this new world, in our view. Intel has to catch up on process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity, and while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up Intel's world to partners for manufacturing and other innovation. Intel has restructured and reorganized, and many executives have boomeranged back in, many of them previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor, an Israeli firm that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers, and the company has announced major investments in CAPEX to build out Foundry capacity. Intel is going to spin out Mobileye, a company it acquired for $15 billion in 2017. Will it try to get a $50 billion valuation? Mobileye is about $1.4 billion in revenue and is likely going to be worth more like 25 to 30 billion; we'll see. 
But Intel is going to maybe get $10 billion in cash from that spin out, that IPO, and it can use that to fund more FABS and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He's got to sell what he's got. And finally, Pat is playing politics beautifully, announcing for example FAB investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says "torrid," bringing back the torrid pace and discipline that Intel is used to. And on the right is Pat at the State of the Union address, looking sharp in shirt, tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity in the US and Europe has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030 and reset supply chains in a market that has become as important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU, and mentioning the CHIPS Act in his presentation in the US and Europe as part of a public private partnership. No doubt he's going to need all the help he can get. Now, we couldn't resist the chart on the left, which shows wafer starts and transistor capacity growth for Intel over time and speaks to its volume aspirations. But we couldn't help noticing that the shape of the curve is somewhat misleading, because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. 
Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick: in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC, data center and network edge businesses and the rest from advanced graphics, HPC, Mobileye and Foundry. Okay, that sounds pretty good. But it has to be taken in the context of the balance of the semiconductor industry; yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry, because that's the only way Intel is going to get back into the volume game, and the volume necessary for the company to compete. Pat built this slide showing the baby blue for today's Foundry business, just under a billion dollars, and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader and is a $50 billion company, growing. So there's definitely a market there that Intel can go after. And adding ARM processors to the mix, and opening up and partnering with the ecosystems out there, can only help volume if Intel can win that business, which it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. 
This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law: for every cumulative doubling of units manufactured, cost will fall by a constant percentage, let's say around 15% in the semiconductor world. That's vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle, so you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. Does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Note the decline in wafer starts, then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making, because it's probably not pretty. But you can see on the bottom left the flattening of the cumulative output curve in IDM 1.0, otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets your cumulative output to 100 million units in the second year, so it takes you two years to get to that 100 million. In other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming wafer volumes stay flat, which is what that chart showed, with good yields, you're at 150 million in year three, 200 in year four, 250 in year five, 300 in year six. Now, that's four years before you can take advantage of the next Wright's Law doubling. 
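The Wright's Law arithmetic above can be sketched directly. A minimal model, using the 15% learning rate and the flat 50-million-units-a-year assumption from the example:

```python
import math

def unit_cost(cumulative_units, initial_units, initial_cost, learning_rate=0.15):
    """Wright's Law: cost falls by `learning_rate` per cumulative doubling."""
    doublings = math.log2(cumulative_units / initial_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Flat 50M units/year: cumulative output hits 100M in year two, 200M in
# year four, 400M in year eight, so each ~15% cost step takes twice as
# long to earn as the one before it.
base = unit_cost(100e6, 100e6, 100.0)          # baseline cost: 100.0
one_doubling = unit_cost(200e6, 100e6, 100.0)  # after one doubling: 85.0
```

That lengthening gap between doublings is exactly the "death spiral" problem: without volume growth, the cost curve flattens while competitors keep riding theirs down.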
You keep going at that flat wafer start rate, with the simplifying assumption we made at the start of 50 million units a year, and, well, you get the point. It's now eight years before the next Wright's Law doubling kicks in, and by then you're cooked. But you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge theme that Pat presented. Now, he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. So Intel is assuming that it will keep up with Moore's Law, meaning a doubling of transistors every, let's say, two years, and I believe it. So bring that back to Wright's Law: in the previous chart, it means with IDM 2.0 Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0, where it was failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture, and you can see the stats here: 114 billion transistors on a five nanometer process, and all the other stats. The M1 Ultra has two chips, bonded together, and Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection; you can see 2.5 terabytes per second. But the brilliance is that the two chips act as a single chip, so you don't have to change the software at all. The way Intel's architecture works, it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. 
Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now, Intel is working on a new architecture, but Apple and others are way ahead. Let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip, and you can see in that diagram that the recently launched M1 Ultra has 114 billion per chip. Now, if you take into account the size of the chips, which are increasing, and the increase in the number of transistors per chip, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember, Intel, assuming the results in the two previous charts we showed were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well, because they can take advantage of TSM's learning curve. So in the previous chart, with Moore's Law alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors, and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are probably prudent to wait unless they have a really long term view. And you can see Intel's performance relative to some of the major competitors. Pat talked about five nodes in four years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes, but Intel just delayed Granite Rapids last month, pushing it out from 2023 to 2024. And it told investors that it's going to have to boost spending to turn this ship around, which is absolutely the case. 
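The density comparison in this segment annualizes as follows; a quick sketch using the episode's own figures (2x every two years versus 6x in 18 months):

```python
# Annualized transistor-density growth, using the figures cited above.
intel_rate = 2 ** (1 / 2) - 1     # 2x every 2 years  -> ~41% per year
apple_rate = 6 ** (1 / 1.5) - 1   # 6x in 18 months   -> ~230% per year

print(f"{intel_rate:.0%} vs {apple_rate:.0%} per year")
```

The multi-year gap this implies is why, even if Intel hits its own roadmap, the ARM camp reaches any given transistor count first.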
And that delay in chips, I feel like the first disappointment won't be the last. But as we've said many times, it's very difficult, actually impossible, to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by reiterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story. It's going to be tough to wrestle that business away from TSM, especially as TSM is setting up shop in Arizona with US manufacturing that's going to placate the US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC, and as part of the CHIPS Act you'll have to throw some business at Intel. Would that be enough, combined with the other Foundry opportunities Intel could theoretically win? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If it had really been paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is that Intel's strategy and execution allow it to reach competitive volumes by the end of the decade, and this national treasure survives to fight for its leadership position in the 2030s, because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis" and works with our CUBE editorial team, Kristen Martin and Cheryl Knight, to get the word out. And thanks to SiliconANGLE's editor in chief, Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. 
Remember, these episodes are all available as podcasts wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can always get in touch with me on email, david.vellante@siliconangle.com, or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)
Breaking Analysis: Rethinking Data Protection in the 2020s
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Techniques to protect sensitive data have evolved over thousands of years, literally. The pace of modern data protection is rapidly accelerating and presents both opportunities and threats for organizations. In particular, the amount of data stored in the cloud, combined with hybrid work models, the clear and present threat of cyber crime, regulatory edicts and the ever expanding edge and associated use cases, should put CXOs on notice that the time is now to rethink your data protection strategies. Hello, and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this Breaking Analysis, we're going to explore the evolving world of data protection and share some information on how we see the market changing in the competitive landscape for some of the top players. Steve Kenniston, AKA the Storage Alchemist, shared a story with me, and it was pretty clever. Way back in 4000 BC, the Sumerians invented the first system of writing. Now, they used clay tokens to represent transactions at that time. To prevent messing with these tokens, they sealed them in clay jars to ensure that the tokens, i.e. the data, would remain secure with an accurate record that was, let's call it, quasi-immutable and lived in a clay vault. Since that time, we've seen quite an evolution in data protection. Tape, of course, was the main means of protecting and backing up data during most of the mainframe era, and that carried into client server computing, which really accentuated and underscored the issues around backup windows and challenges with RTO, Recovery Time Objective, and RPO, Recovery Point Objective, and just overall recovery nightmares.
Then in the 2000s, data reduction made disk-based backup more popular and pushed tape into an archive, last-resort medium. Data Domain, then EMC, now Dell, still sells many purpose-built backup appliances, as do others, as a primary disk-based backup target. The rise of virtualization brought more changes in backup and recovery strategies, as a reduction in physical resources squeezed the one application that wasn't underutilizing compute, i.e. backup. And we saw the rise of Veeam, the cleverly named company that became synonymous with data protection for virtual machines. Now the cloud has created new challenges related to data sovereignty, governance, latency, copy creep, expense, et cetera, but more recently cyber threats have elevated data protection to become a critical adjacency to information security. Cyber resilience, to specifically protect against ransomware attacks, is the new trend being pushed by the vendor community as organizations urgently look for help with this insidious threat. Okay, so there are two major disruptors that we're going to talk about today, the cloud and cyber crime, especially around ransoming your data. Every customer is using the cloud in some way, shape or form. Around 76% are using multiple clouds, according to a recent study by HashiCorp. We've talked extensively about skill shortages on theCUBE, and data protection and security concerns are really key challenges to address, given that the skill shortage is a real talent gap in terms of being able to throw people at solving this problem. So what customers are doing is either building out or buying, mostly building, abstraction layers to hide the underlying cloud complexity. The good news is this simplifies provisioning and management, but it creates problems around opacity. In other words, sometimes you can't see what's going on with the data. These challenges fundamentally become data problems, in our view.
Things like fast, accurate, and complete backup and recovery, compliance, data sovereignty, data sharing, I mentioned copy creep, cyber resiliency, privacy protections, these are all challenges brought to the fore by the cloud, the advantages, the pros and the cons. Now, remote workers are especially vulnerable, and as clouds expand rapidly, data protection technologies are struggling to keep pace. So let's talk briefly about the rapidly expanding public cloud. This chart shows worldwide revenue for the big four hyperscalers. As you can see, we projected they're going to surpass $115 billion in revenue in 2021, up from $86 billion last year. So it's a huge market, growing in the 35% range. The interesting thing is last year these firms took in 80-plus billion dollars in revenue, but they spent $100 billion in CapEx. So they're building out infrastructure for the industry. This is a gift to the balance of the industry. Now, to date, legacy vendors and their surrounding community have been pretty defensive around the cloud. "Oh, not everything is going to move to the cloud, it's not a zero sum game," we hear. And while that's all true, the narrative was really kind of a defensive posture, and that's starting to change as large tech companies like Dell, IBM, Cisco, HPE, and others see opportunities to build on top of this infrastructure. You certainly see that with Arvind Krishna's comments at IBM, Cisco obviously leaning in from a networking and security perspective, HPE using language that is very much cloud-like with its GreenLake strategy. And of course, Dell is all over this. Let's listen to how Michael Dell is thinking about this opportunity when he was questioned on theCUBE by John Furrier about the cloud. Play the clip. >> Well, clouds are infrastructure, right? So you can have a public cloud, you can have an edge cloud, a private cloud, a Telco cloud, a hybrid cloud, multicloud, here cloud, there cloud, everywhere cloud, cloud.
Yeah, they'll all be there, but it's basically infrastructure. And how do you make that as easy to consume and create the flexibility that enables everything? >> Okay, so in my view, Michael nailed it, the cloud is everywhere. You have to make it easy, and you have to admire the scope of his comments. We know this guy, he thinks big, right? He said enables everything. What he's basically saying is that technology is at the point where it has the potential to touch virtually every industry, every person, every problem, everything. So let's talk about how this informs the changing world of data protection. Now, we've seen with the pandemic an acceleration toward digital, and that has caused an escalation, if you will, in the data protection mandate. So essentially what we're talking about here is the application of Michael Dell's cloud everywhere comments. You've got on-prem, private clouds, hybrid clouds, you've got public clouds across AWS, Azure, Google, Alibaba, really those big four hyperscalers. You've got many clouds that are popping up all over the place, but multicloud, to that HashiCorp data point, is at around 76%, and you now see the cloud expanding out to the edge, with programmable infrastructure heading out to the edge. So the opportunity here, to build the data protection cloud, is to have the same experiences across all these estates with automation and orchestration in that cloud, that data protection cloud if you will. So think of it as an abstraction layer that hides the underlying complexity; you log into that data protection cloud and it's the same experience. So you've got backup, you've got recovery, you can handle bare metal, you can do virtualized backups and recoveries, any cloud, any OS, out to the edge, Kubernetes and container use cases, which is an emerging data protection requirement, and you've got analytics, and perhaps you've got PII, Personally Identifiable Information, protection in there.
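To make that abstraction-layer idea concrete, here's a minimal sketch in Python. Everything here is hypothetical, the class and method names don't come from any real product; it just illustrates one login, one experience, with the underlying cloud primitives hidden behind a catalog of distributed metadata:

```python
# Illustrative sketch only: a hypothetical "data protection cloud"
# abstraction layer. No real product API is implied.
from dataclasses import dataclass

@dataclass
class BackupJob:
    source: str   # e.g. "aws:us-east-1/vol-123" or an edge Kubernetes volume
    target: str   # any cloud, any OS, bare metal or virtualized
    kind: str     # "bare-metal", "vm", "container", ...

class DataProtectionCloud:
    """One login, one experience; the cloud primitives are hidden below."""
    def __init__(self):
        # distributed metadata: tracks data irrespective of location
        self.catalog = []

    def backup(self, job: BackupJob) -> str:
        # a real system would dispatch to cloud-native services here
        record = f"{job.kind}:{job.source}->{job.target}"
        self.catalog.append(record)
        return record

    def recover(self, source: str) -> list:
        # same experience whether the data lives on-prem, in a
        # public cloud, or at the edge
        return [r for r in self.catalog if source in r]

dpc = DataProtectionCloud()
dpc.backup(BackupJob("aws:us-east-1/vol-123", "azure:archive", "vm"))
print(dpc.recover("vol-123"))  # ['vm:aws:us-east-1/vol-123->azure:archive']
```

The point of the sketch is the shape, not the implementation: the caller never sees AWS, Azure, or edge specifics, only the catalog.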
So the attributes of this data protection cloud: again, it abstracts the underlying cloud primitives, takes care of that. It also exploits cloud native technologies. In other words, it takes advantage of things like machine learning, which all the big cloud players have expertise in, new processor models like Graviton, and other services that are in the cloud natively. It doesn't just wrap its on-prem stack in a container and shove it into the cloud, no, it actually re-architects around those cloud native services. And it's got distributed metadata to track files and volumes and any organizational data irrespective of location. And it enables sets of services to intelligently govern in a federated governance manner while ensuring data integrity, and all this is automated and orchestrated to help with the skills gap. Now, as it relates to cyber recovery, air gap solutions must be part of the portfolio, but managed outside of that data protection cloud that we just briefly described. The orchestration and the management must also be gapped, if you will, otherwise, you don't have an air gap. So all of this is really a cohort to cyber security, or your cybersecurity strategy and posture, but you have to be careful here because your data protection strategy could get lost in this mess. So you want to think about the data protection cloud as, again, an adjacency or maybe an overlay to your cybersecurity approach, not a bolt on; it's got to be fundamentally architected from the bottom up. And yes, this is going to maybe create some overheads and some integration challenges, but this is the way in which we think you should think about it. So you'll likely need a partner to do this. Again, we come back to the skills gap, and we're seeing the rise of MSPs, managed service providers and specialist service providers, not public cloud providers; people are concerned about lock-in and that's really not their role.
They're not high touch services companies, and they're probably not your technology arms dealer; excuse me, the arms dealers are selling technology to these MSPs. So the MSPs, they have intimate relationships with their customers. They understand their business and specialize in architecting solutions to handle these difficult challenges. So let's take a look at some of the risk factors here and dig a little bit into the cyber threat that organizations face. This is a slide that, again, the Storage Alchemist, Steve Kenniston, shared with me. It's based on a study that IBM funds with the Ponemon Institute, which is a firm that studies things like the cost of breaches and has for many, many, many years. The slide shows the total cost of a typical breach within each dot on the Y-axis and the frequency in percentage terms on the horizontal axis. Now it's interesting, the top two are compromised credentials and phishing, which once again proves that bad user behavior trumps good security every time. But the point here is that the adversary's attack vectors are many, and specific companies often specialize in solving these problems, often with point products, which is why the slide that we showed from Optiv earlier, that messy slide, looks so cluttered. So it's a huge challenge for companies, and that's why we've seen the emergence of cyber recovery solutions from virtually all the major players. Ransomware and the SolarWinds hack have made trust the number one issue for CIOs and CISOs and boards of directors, and shifting CISO spending patterns are clear, shifting largely because they're catalyzed by the work from home: spending is moving outside of the moat to endpoint security, identity and access management, cloud security, and horizontal network security. So security priorities and spending are changing, and that's why you see the emergence of disruptors we've covered extensively, Okta, Crowdstrike, Zscaler.
And cyber resilience is top of mind, robust solutions are required, and that's why companies are building cyber recovery solutions that are most often focused on the backup corpus, because that's a target for the bad guys. So there is an opportunity, however, to expand from just the backup corpus to all data and protect this kind of 3-2-1, or maybe it's 3-2-1-1: three copies, two different media, one backup in the cloud and one that's air gapped. So this can be extended to primary storage, copies, snaps, containers, data in motion, et cetera, to have a comprehensive data protection strategy. Customers, as I said earlier, are increasingly looking to managed service providers and specialists because of that skills gap, and that's a big reason why automation and orchestration are so important. And automation and orchestration, I'll emphasize, on the air gap solutions should be separated physically and logically. All right, now let's take a look at some of the ETR data and some of the players. This is a chart that we like to show often. It's an X-Y axis, where the Y-axis is net score, which is a measure of spending momentum, and the horizontal axis is market share. Now market share is an indicator of pervasiveness in the survey. It's not spending market share, it's not market share of the overall market, it's a term that ETR uses. It's essentially market share of the responses within the survey set; think of it as mind share. Okay, you've got the pure plays here on this slide in the storage category. There is no data protection or backup category, so what we've done is we've isolated the pure plays, or close to pure plays, in backup and data protection. Notice that red line; that red line is kind of our subjective view, and anything that's over that 40% line is elevated. You can see only Rubrik in the July survey is over that 40% line. I'll show you the Ns in a moment. Smaller Ns, but still, Rubrik is the only one. Now look at Cohesity and Rubrik in the January 2020 survey.
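As a quick aside, the 3-2-1-1 scheme just described can be written down as a simple policy check. This is a sketch, assuming a made-up description of each copy; real tooling would inventory actual backup targets:

```python
# Sketch of a 3-2-1-1 policy check: three copies, two different media,
# one copy offsite (e.g. in the cloud), and one that's air gapped.
# The copy records below are hypothetical.
def meets_3_2_1_1(copies):
    media = {c["media"] for c in copies}
    return (len(copies) >= 3
            and len(media) >= 2
            and any(c["offsite"] for c in copies)
            and any(c["air_gapped"] for c in copies))

copies = [
    {"media": "disk",  "offsite": False, "air_gapped": False},  # primary
    {"media": "cloud", "offsite": True,  "air_gapped": False},  # cloud backup
    {"media": "tape",  "offsite": True,  "air_gapped": True},   # air-gapped copy
]
print(meets_3_2_1_1(copies))  # True
```

Drop the air-gapped copy and the check fails, which is exactly the gap the cyber recovery vendors are selling against.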
So last year, pre-pandemic, Cohesity and Rubrik peaked, and they've come well off their peaks for net score. Look at Veeam. Having studied this data for the last, say, 24-plus months, Veeam has been steady Eddie. It is really always in the mid to high 30s, and it always shows a large shared N, so it's coming up in the survey, customers are mentioning Veeam, and it's got a very solid net score. It's not above that 40% line, but it's hovering just below consistently; that's very impressive. Commvault has steadily been moving up. Sanjay Mirchandani has made some acquisitions; he did the Hedvig acquisition. They launched Metallic, and that's driving cloud affinity within Commvault's large customer base, so it's a good example of a legacy player pivoting, evolving and transforming itself. Veritas continues to underperform in the ETR surveys relative to the other players. Now, for context, let's add IBM and Dell to the chart. Just note, this is IBM's and Dell's full storage portfolio; the category in the taxonomy at ETR is all storage. The previous slide isolated the pure plays, but this now adds in IBM and Dell. It's probably representative of where they would be, with Dell larger on the horizontal axis than IBM, of course, and you can see the spending momentum accordingly in the data chart that we've inserted. So, smaller Ns for Rubrik and Cohesity, but still enough to pay attention to; it's not like one or two responses. When you're at 20-plus, 15-plus, 25-plus, you can start to pay attention to trends. Veeam again is very impressive. Its net score is solid, it's got a consistent presence in the dataset, it's a clear leader here. SimpliVity is small but it's improving relative to the last several surveys, and we talked about Commvault. Now, I want to emphasize something that we've been hitting on for quite some time now, and that's the renaissance that's coming in compute.
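Before we get to compute, a note on the net score metric used in the charts above. ETR's actual methodology has more response categories than this; the simplified sketch below just nets the share of customers increasing spend against the share decreasing, which captures the spending-momentum intuition. The sample responses are made up:

```python
# Simplified net-score-style calculation: percentage of respondents
# increasing spend minus percentage decreasing spend.
# (ETR's real methodology is more granular; this is just the intuition.)
def net_score(responses):
    n = len(responses)
    up = sum(r == "increase" for r in responses)
    down = sum(r == "decrease" for r in responses)
    return 100.0 * (up - down) / n

# 20 hypothetical customers: 11 increasing, 6 flat, 3 decreasing
responses = ["increase"] * 11 + ["flat"] * 6 + ["decrease"] * 3
print(net_score(responses))  # 40.0 -- right at the "elevated" line
```

A vendor hovering in the high 30s, like Veeam in the survey data, would have a response mix just slightly less favorable than this.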
Now, we all know about Moore's Law, the doubling of transistor density every two years, 18 to 24 months, and that leads to a doubling of performance in that time frame. That x86 curve is in blue, and if you do the math, this is expressed in trillions of operations per second. The orange line is representative of Apple's A series, culminating most recently in the A15. The A series is the technology basis for what's inside the iPhone, and now the new Apple laptops, which are replacing Intel. That's that orange line there; we'll come back to that. So go back to the blue line for a minute. If you do the math on doubling performance every 24 months, it comes out to roughly 40% annual improvement in processing power. That's now moderated. So Moore's Law is waning in one sense; we wrote a piece, "Moore's Law is Not Dead," so I'm sort of contradicting myself there, but the traditional Moore's Law curve on x86 is waning. It's probably now down to around 30%, low 30s. But look at the orange line. Again, using the A series as an indicator, if you combine the CPU, the NPU, which is the neural processing unit, the XPU, pick whatever PU you want, the accelerators, the DSPs, that line is growing at 100% plus per year; it's probably more accurately around 110% a year. So there's a new industry curve occurring, and it's being led by the Arm ecosystem. The other key factor: you see it in a lot of consumer use cases, Apple is an example, but you're also seeing it in things like Tesla and Amazon with AWS Graviton, the Annapurna acquisition, building out Graviton and Nitro, which are based on Arm. You can get from design to tape-out in less than two years, whereas the Intel cycles, we know, have been running four to five years. Now maybe Pat Gelsinger is compressing those, but Intel is behind. So organizations that are on that orange curve are going to see faster acceleration, lower cost, lower power, et cetera.
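The arithmetic behind those curves is simple enough to verify: a doubling every N months implies an annual growth rate of 2^(12/N) - 1. A quick sketch:

```python
# A doubling every N months implies an annual rate of 2**(12/N) - 1.
def annual_rate(doubling_months):
    return 2 ** (12 / doubling_months) - 1

# Doubling every 24 months -> ~41%/year, the "roughly 40%" classic
# Moore's Law figure cited above.
print(round(annual_rate(24) * 100))  # 41

# Doubling every 12 months -> 100%/year, the combined SoC curve.
print(round(annual_rate(12) * 100))  # 100
```

Run the other direction, a 110%/year curve doubles roughly every 11 months, which is why the combined-processor line pulls away from x86 so quickly.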
All right, so what's the tie to data protection? I'm going to leave you with this chart. Arm has introduced its Confidential Compute Architecture and is ushering in a new era of security and data protection. Zero Trust is the new mandate, and what Arm has done with what they call realms is create physical separation of the vulnerable components, essentially creating physical buckets to put code in and to put data in, separate from the OS. Remember, the OS is one of the most valuable entry points for hackers, because it contains privileged access, and it's a weak link because of things like memory leakages and vulnerabilities. And malicious code can be placed by bad guys within data in the OS and appear benign, even though it's anything but. So in this architecture, all the OS does is make API calls to the realm controller. That's the only interaction. So it makes it much harder for bad actors to get access to the code and the data. And importantly, very importantly, it's an end-to-end architecture, so there's protection throughout; if you're pulling data from the edge and bringing it back to on-prem or the cloud, you've got that end-to-end architecture and protection throughout. So the link to data protection is that backup software needs to be the most trusted of applications, because it's one of the most targeted areas in a cyber attack. Realms provide an end-to-end separation of data and code from the OS and are a better architectural construct to support Zero Trust and confidential computing and critical use cases like data protection/backup and other digital business apps. So our call to action is: backup software vendors, you can lead the charge. Arm is several years ahead at the moment, ahead of Intel in our view. So pay attention to that; we're not saying over-rotate, but go research it, go investigate it.
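To make the realm idea concrete, here's a purely illustrative toy model. This is NOT the Arm Confidential Compute Architecture API, the names are invented; it only models the principle described above, that the OS never touches realm code or data directly and can only issue narrow API calls to a realm controller:

```python
# Toy model of the realm principle only -- not Arm's actual API.
class Realm:
    """Code and data live here, physically separated from the OS in hardware."""
    def __init__(self, code, data):
        self._code = code
        self._data = data

class RealmController:
    def __init__(self):
        self._realms = {}

    def create_realm(self, realm_id, code, data):
        self._realms[realm_id] = Realm(code, data)

    def invoke(self, realm_id, request):
        # The only interaction the OS gets: an API call. It never sees
        # the realm's code or data, so OS-resident malware can't reach them.
        realm = self._realms[realm_id]
        return realm._code(realm._data, request)

ctrl = RealmController()
ctrl.create_realm("backup",
                  lambda data, req: f"{req}:{len(data)} objects",
                  ["vol-1", "vol-2"])
print(ctrl.invoke("backup", "verify"))  # verify:2 objects
```

In hardware the separation is enforced physically, not by Python scoping, but the interface shape is the point: a compromised OS can still only go through `invoke`.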
And use your relationships with Intel to accelerate its version of this architecture, or ideally the industry should agree on common standards and solve this problem together. Pat Gelsinger told us on theCUBE that if it's the last thing he does in his industry life, he's going to solve this security problem. That's when he was at VMware. Well, Pat, you're in an even better place to do it now. You don't have to solve it yourself; you can't, and you know that. So while you're going about your business saving Intel, look to partner with Arm, I know it sounds crazy, use these published APIs and push to collaborate on an open source architecture that addresses the cyber problem. If anyone can do it, you can. Okay, that's it for today. Remember, these episodes are all available as podcasts; all you've got to do is search Breaking Analysis podcast. I publish weekly on Wikibon.com and SiliconANGLE.com. You can reach me @dvellante on Twitter or email me at Dave.Vellante@SiliconANGLE.com. And don't forget to check out ETR.plus for all the survey and data action. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching everybody, be well, and we'll see you next time. (upbeat music)
Breaking Analysis: Rethinking Data Protection in the 2020s
>> From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is braking analysis with Dave Vellante. >> Techniques to protect sensitive data have evolved over thousands of years, literally. The pace of modern data protection is rapidly accelerating and presents both opportunities and threats for organizations. In particular, the amount of data stored in the cloud combined with hybrid work models, the clear and present threat of cyber crime, regulatory edicts, and the ever expanding edge and associated use cases should put CXOs on notice that the time is now to rethink your data protection strategies. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we're going to explore the evolving world of data protection and share some information on how we see the market changing in the competitive landscape for some of the top players. Steve Kenniston, AKA the Storage Alchemist, shared a story with me, and it was pretty clever. Way back in 4000 BC, the Sumerians invented the first system of writing. Now, they used clay tokens to represent transactions at that time. Now, to prevent messing with these tokens, they sealed them in clay jars to ensure that the tokens, i.e the data, would remain secure with an accurate record that was, let's call it quasi, immutable, and lived in a clay vault. And since that time, we've seen quite an evolution of data protection. Tape, of course, was the main means of protecting data and backing data up during most of the mainframe era. And that carried into client server computing, which really accentuated and underscored the issues around backup windows and challenges with RTO, recovery time objective and RPO recovery point objective. And just overall recovery nightmares. Then in the 2000's data reduction made disk-based backup more popular and pushed tape into an archive last resort media. 
Data Domain, then EMC, now Dell still sell many purpose-built backup appliances as do others as a primary backup target disc-based. The rise of virtualization brought more changes in backup and recovery strategies, as a reduction in physical resources squeezed the one application that wasn't under utilizing compute, i.e, backup. And we saw the rise of Veem, the cleverly-named company that became synonymous with data protection for virtual machines. Now, the cloud has created new challenges related to data sovereignty, governance, latency, copy creep, expense, et cetera. But more recently, cyber threats have elevated data protection to become a critical adjacency to information security. Cyber resilience to specifically protect against attacks is the new trend being pushed by the vendor community as organizations are urgently looking for help with this insidious threat. Okay, so there are two major disruptors that we're going to talk about today, the cloud and cyber crime, especially around ransoming your data. Every customer is using the cloud in some way, shape, or form. Around 76% are using multiple clouds, that's according to a recent study by Hashi Corp. We've talked extensively about skill shortages on theCUBE, and data protection and security concerns are really key challenges to address, given that skill shortage is a real talent gap in terms of being able to throw people at solving this problem. So what customers are doing, they're either building out or they're buying really mostly building abstraction layers to hide the underlying cloud complexity. So what this does... The good news is it's simplifies provisioning and management, but it creates problems around opacity. In other words, you can't see sometimes what's going on with the data. These challenges fundamentally become data problems, in our view. Things like fast, accurate, and complete backup recovery, compliance, data sovereignty, data sharing. 
I mentioned copy creep, cyber resiliency, privacy protections. These are all challenges brought to fore by the cloud, the advantages, the pros, and the cons. Now, remote workers are especially vulnerable. And as clouds span rapidly, data protection technologies are struggling to keep pace. So let's talk briefly about the rapidly-expanding public cloud. This chart shows worldwide revenue for the big four hyperscalers. As you can see, we projected that they're going to surpass $115 billion in revenue in 2021. That's up from 86 billion last year. So it's a huge market, it's growing in the 35% range. The interesting thing is last year, 80-plus billion dollars in revenue, but 100 billion dollars was spent last year by these firms in cap ex. So they're building out infrastructure for the industry. This is a gift to the balance of the industry. Now to date, legacy vendors and the surrounding community have been pretty defensive around the cloud. Oh, not everything's going to move to the cloud. It's not a zero sum game we hear. And while that's all true, the narrative was really kind of a defensive posture, and that's starting to change as large tech companies like Dell, IBM, Cisco, HPE, and others see opportunities to build on top of this infrastructure. You certainly see that with Arvind Krishna comments at IBM, Cisco obviously leaning in from a networking and security perspective, HPE using language that is very much cloud-like with its GreenLake strategy. And of course, Dell is all over this. Let's listen to how Michael Dell is thinking about this opportunity when he was questioned on the queue by John Furrier about the cloud. Play the clip. So in my view, Michael nailed it. The cloud is everywhere. You have to make it easy. And you have to admire the scope of his comments. We know this guy, he thinks big. He said, "Enables everything." 
He's basically saying is that technology is at the point where it has the potential to touch virtually every industry, every person, every problem, everything. So let's talk about how this informs the changing world of data protection. Now, we all know, we've seen with the pandemic, there's an acceleration in toward digital, and that has caused an escalation, if you will, in the data protection mandate. So essentially what we're talking about here is the application of Michael Dell's cloud everywhere comments. You've got on-prem, private clouds, hybrid clouds. You've got public clouds across AWS, Azure, Google, Alibaba. Really those are the big four hyperscalers. You got many clouds that are popping up all their place. But multi-cloud, to that Hashi Corp data point, 75, 70 6%. And then you now see the cloud expanding out to the edge, programmable infrastructure heading out to the edge. So the opportunity here to build the data protection cloud is to have the same experiences across all these estates with automation and orchestration in that cloud, that data protection cloud, if you will. So think of it as an abstraction layer that hides that underlying complexity, you log into that data protection cloud, it's the same experience. So you've got backup, you've got recovery, you can handle bare metal. You can do virtualized backups and recoveries, any cloud, any OS, out to the edge, Kubernetes and container use cases, which is an emerging data protection requirement. And you've got analytics, perhaps you've got PII, personally identifiable information protection in there. So the attributes of this data protection cloud, again, abstracts the underlying cloud primitives, takes care of that. It also explodes cloud native technologies. In other words, it takes advantage of whether it's machine learning, which all the big cloud players have expertise in, new processor models, things like graviton, and other services that are in the cloud natively. 
It doesn't just wrap it's on-prem stack in a container and shove it into the cloud, no. It actually re architects or architects around those cloud native services. And it's got distributed metadata to track files and volumes and any organizational data irrespective of location. And it enables sets of services to intelligently govern in a federated governance manner while ensuring data integrity. And all this is automated and an orchestrated to help with the skills gap. Now, as it relates to cyber recovery, air-gap solutions must be part of the portfolio, but managed outside of that data protection cloud that we just briefly described. The orchestration and the management must also be gaped, if you will. Otherwise, (laughs) you don't have an air gap. So all of this is really a cohort to cyber security or your cybersecurity strategy and posture, but you have to be careful here because your data protection strategy could get lost in this mess. So you want to think about the data protection cloud as again, an adjacency or maybe an overlay to your cybersecurity approach. Not a bolt on, it's got to be fundamentally architectured from the bottom up. And yes, this is going to maybe create some overheads and some integration challenges, but this is the way in which we think you should think about it. So you'll likely need a partner to do this. Again, we come back to the skill skills gap if we're seeing the rise of MSPs, managed service providers and specialist service providers. Not public cloud providers. People are concerned about lock-in, and that's really not their role. They're not high-touch services company. Probably not your technology arms dealer, (clear throat) excuse me, they're selling technology to these MSPs. So the MSPs, they have intimate relationships with their customers. They understand their business and specialize in architecting solutions to handle these difficult challenges. 
So let's take a look at some of the risk factors here and dig a little bit into the cyber threat that organizations face. This is a slide that, again, the Storage Alchemist, Steve Kenniston, shared with me. It's based on a study that IBM funds with the Ponemon Institute, which is a firm that studies things like the cost of breaches and has for many, many years. The slide shows the total cost of a typical breach within each dot on the Y axis, and the frequency in percentage terms on the horizontal axis. Now, it's interesting. The top two are compromised credentials and phishing, which once again proves that bad user behavior trumps good security every time. But the point here is that the adversary's attack vectors are many. And specific companies often specialize in solving these problems, often with point products, which is why the slide that we showed from Optiv earlier, that messy slide, looks so cluttered. So there's a huge challenge for companies. And that's why we've seen the emergence of cyber recovery solutions from virtually all the major players. Ransomware and the SolarWinds hack have made trust the number one issue for CIOs and CISOs and boards of directors. Shifting CISO spending patterns are clear. They're shifting largely because they're catalyzed by work from home: outside of the moat to endpoint security, identity and access management, cloud security, horizontal network security. So security priorities and spending are changing. And that's why you see the emergence of disruptors we've covered extensively: Okta, CrowdStrike, Zscaler. And cyber resilience is top of mind, and robust solutions are required. And that's why companies are building cyber recovery solutions that are most often focused on the backup corpus, because that's a target for the bad guys.
So there is an opportunity, however, to expand from just the backup corpus to all data, and protect this kind of 3-2-1, or maybe it's 3-2-1-1: three copies, two backups, a backup in the cloud and one that's air-gapped. So this can be extended to primary storage, copies, snaps, containers, data in motion, et cetera, to have a comprehensive data protection strategy. And customers, as I said earlier, are increasingly looking to managed service providers and specialists because of that skills gap. And that's a big reason why automation and orchestration are so important. And automation and orchestration, I'll emphasize, on the air-gap solutions should be separated physically and logically. All right, now let's take a look at some of the ETR data and some of the players. This is a chart that we like to show often. It's an X-Y axis. The Y axis is net score, which is a measure of spending momentum. And the horizontal axis is market share. Now, market share is an indicator of pervasiveness in the survey. It's not spending market share, it's not market share of the overall market; it's a term that ETR uses. It's essentially market share of the responses within the survey set. Think of it as mind share. Okay, you've got the pure plays here on this slide, in the storage category. There is no data protection or backup category, so what we've done is isolate the pure plays, or close to pure plays, in backup and data protection. Now notice that red line. That red is kind of our subjective view: anything over that 40% line is elevated. And you can see only Rubrik, in the July survey, is over that 40% line. I'll show you the Ns in a moment. Smaller Ns, but still, Rubrik is the only one. Now, look at Cohesity and Rubrik in the January 2020 survey. So last year, pre-pandemic, Cohesity and Rubrik have come well off their peak for net score. Look at Veeam. Veeam, having studied this data for the last, say, 24 months, has been steady Eddie.
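Coming back to the 3-2-1-1 rule mentioned above (three copies, on two different media, one in the cloud, one air-gapped), it can be expressed as a simple policy check. A hypothetical sketch for illustration; the attribute names are made up, not any product's schema:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    medium: str          # e.g. "disk", "tape", "object-store"
    location: str        # e.g. "onprem", "cloud", "vault"
    air_gapped: bool = False

def satisfies_3211(copies):
    """True if the set of copies meets the 3-2-1-1 rule."""
    return (len(copies) >= 3                                # 3 copies
            and len({c.medium for c in copies}) >= 2        # 2 media types
            and any(c.location == "cloud" for c in copies)  # 1 in the cloud
            and any(c.air_gapped for c in copies))          # 1 air-gapped

copies = [
    Copy("disk", "onprem"),                    # primary backup
    Copy("object-store", "cloud"),             # cloud copy
    Copy("tape", "vault", air_gapped=True),    # air-gapped copy
]
print(satisfies_3211(copies))  # True
```

Dropping the air-gapped copy from the list makes the check fail, which is exactly the gap a cyber recovery solution is meant to close.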
It is really always in the mid-to-high 30s, always shows a large shared N, so it's coming up in the survey. Customers are mentioning Veeam. And it's got a very solid net score. It's not above that 40% line, but it's hovering just below, consistently. That's very impressive. Commvault has steadily been moving up. Sanjay Mirchandani has made some acquisitions. He did the Hedvig acquisition. They launched Metallic, and that's driving cloud affinity within Commvault's large customer base. So it's a good example of a legacy player pivoting, evolving and transforming itself. Veritas continues to underperform in the ETR surveys relative to the other players. Now, for context, let's add IBM and Dell to the chart. Just note, this is IBM and Dell's full storage portfolio. The category in the taxonomy at ETR is all storage. In the previous slide, I isolated the pure plays. But this now adds in IBM and Dell, and it's probably representative of where they would be; probably Dell larger on the horizontal axis than IBM, of course. And you can see the spending momentum accordingly. So you can see that in the data chart that we've inserted. So some smaller Ns for Rubrik and Cohesity, but still enough to pay attention to; it's not like one or two. When you're at 20-plus, 15-plus, 25-plus, you can start to pay attention to trends. Veeam, again, is very impressive. Its net score is solid, it's got a consistent presence in the dataset; it's the clear leader here. SimpliVity is small, but it's improving relative to the last several surveys. And we talked about Commvault. Now, I want to emphasize something that we've been hitting on for quite some time now, and that's the renaissance that's coming in compute. Now, we all know about Moore's Law, the doubling of transistor density every two years, 18 to 24 months, and that leads to a doubling of performance in that timeframe. x86, that x86 curve, is in the blue. And if you do the math, this is expressed in trillions of operations per second.
The orange line is representative of Apple's A series, culminating most recently in the A15. The A series is what Apple is now... Well, it's the technology basis for what's inside M1, the new Apple laptops, which is replacing Intel. That's that orange line there; we'll come back to that. So go back to the blue line for a minute. If you do the math on doubling performance every 24 months, it comes out to roughly 40% annual improvement in processing power per year. That's now moderated. So Moore's Law is waning in one sense, and yet we wrote a piece saying Moore's Law is not dead. So I'm sort of contradicting myself there. But the traditional Moore's Law curve on x86 is waning. It's probably now down to around 30%, low 30s. But look at the orange line. Again, using the A series as an indicator, if you combine the CPU, the NPU, which is the neural processing unit, XPU, pick whatever PU you want, the accelerators, the DSPs, that line is growing at 100%-plus per year. It's probably more accurately around 110% a year. So there's a new industry curve occurring, and it's being led by the Arm ecosystem. The other key factor there, and you're seeing this in a lot of use cases, a lot of consumer use cases, Apple is an example, but you're also seeing it in things like Tesla, and Amazon with AWS Graviton, the Annapurna acquisition, building out Graviton and Nitro; that's based on Arm. You can get from design to tape-out in less than two years, whereas the Intel cycles, we know, have been running at four to five years. Maybe Pat Gelsinger is compressing those, but Intel is behind. So organizations that are on that orange curve are going to see faster acceleration, lower cost, lower power, et cetera. All right, so what's the tie to data protection? I'm going to leave you with this chart. Arm has introduced its Confidential Compute Architecture and is ushering in a new era of security and data protection. Zero trust is the new mandate.
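The growth math cited above is easy to sanity-check: doubling every 24 months works out to about 41% compound annual growth (the "roughly 40%" figure), while a 110%-a-year curve more than quadruples performance over the same two years.

```python
# Doubling performance every 24 months, expressed as compound annual growth
cagr_x86 = 2 ** (12 / 24) - 1
print(f"{cagr_x86:.1%}")        # 41.4%, i.e. the "roughly 40%" cited

# An Arm-ecosystem-style 110%-per-year curve over the same two years
two_year_gain = (1 + 1.10) ** 2
print(f"{two_year_gain:.2f}x")  # 4.41x, versus 2x for the classic Moore's Law pace
```

That gap, 4.4x versus 2x every two years, is the "new industry curve" in a nutshell.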
And what Arm has done with what they call realms is create physical separation of the vulnerable components, by creating essentially physical buckets to put code in and to put data in, separate from the OS. Remember, the OS is one of the most valuable entry points for hackers, because it contains privileged access, and it's a weak link because of things like memory leaks and vulnerabilities. And malicious code can be placed by bad guys within data in the OS and appear benign, even though it's anything but. So in this model, all the OS does is make API calls to the realm controller. That's the only interaction. So it makes it much harder for bad actors to get access to the code and the data. And importantly, very importantly, it's an end-to-end architecture, so there's protection throughout. If you're pulling data from the edge and bringing it back to on-prem or the cloud, you've got that end-to-end architecture and protection throughout. So the link to data protection is that backup software vendors need to be the most trusted of applications, because backup is one of the most targeted areas in a cyber attack. Realms provide an end-to-end separation of data and code from the OS, and that's a better architectural construct to support zero trust, confidential computing, and critical use cases like data protection/backup and other digital business apps. So our call to action is: backup software vendors, you can lead the charge. Arm is several years ahead of Intel at the moment, in our view. So you've got to pay attention to that, research that. We're not saying over-rotate, but go investigate it. And use your relationships with Intel to accelerate its version of this architecture. Or ideally, the industry should agree on common standards and solve this problem together.
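A toy model may help make the realm idea concrete: the OS never touches realm memory directly and only holds an opaque handle; every interaction goes through a realm controller. This is a conceptual sketch of the separation only, not Arm's actual CCA interface; all names below are invented.

```python
class RealmController:
    """Mediates every interaction; callers hold no direct reference to realm state."""
    def __init__(self):
        self._realms = {}      # realm_id -> (code, data), hidden from callers
        self._next_id = 0

    def create_realm(self, code, data):
        self._next_id += 1
        self._realms[self._next_id] = (code, data)
        return self._next_id   # the "OS" only ever sees an opaque handle

    def invoke(self, realm_id, request):
        code, data = self._realms[realm_id]
        return code(data, request)  # runs "inside" the realm; only the result crosses out

# "OS" side: it can make calls, but it cannot read the realm's data
ctl = RealmController()
rid = ctl.create_realm(lambda data, req: data["secret"] == req, {"secret": "s3cr3t"})
print(ctl.invoke(rid, "s3cr3t"))  # True; the answer crosses the boundary, the data doesn't
```

The real architecture enforces this boundary in hardware, which is the whole point; the sketch just shows the API-call-only interaction pattern described above.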
Pat Gelsinger told us in theCUBE that if it's the last thing he does in his industry life, he's going to solve this security problem. That's when he was at VMware. Well, Pat, you're in an even better place to do it now. You don't have to solve it yourself; you can't, and you know that. So while you're going about your business saving Intel, look to partner with Arm. I know it sounds crazy, but use these published APIs and push to collaborate on an open source architecture that addresses the cyber problem. If anyone can do it, you can. Okay, that's it for today. Remember, these episodes are all available as podcasts. All you've got to do is search Breaking Analysis Podcast. I publish weekly on wikibon.com and siliconangle.com. Or you can reach me @dvellante on Twitter, or email me at david.vellante@siliconangle.com. And don't forget to check out etr.plus for all the survey data and action. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, everybody. Be well, and we'll see you next time. (gentle music)
Dr Eng Lim Goh, Vice President, CTO, High Performance Computing & AI
(upbeat music) >> Welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Dr. Eng Lim Goh, who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCUBE. >> Hey, hello, Dave. Great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course you hosted the Day 2 keynotes here at Discover, and you talked about thriving in the age of insights and how to craft a data-centric strategy, and you addressed some of the biggest problems I think organizations face with data. And that is: data is plentiful, but insights are harder to come by. And you really dug into some great examples in retail, banking, medicine and healthcare, and media. But stepping back a little bit, zooming out on Discover '21, what do you make of the event so far, and what are some of your big takeaways? >> Hmm, well, you started with an insightful question. Data is everywhere, but we lack the insight. That's a main reason why Antonio, on Day 1, focused on and talked about the fact that we are now in the age of insight, and how to thrive in this new age. What I then did in the Day 2 keynote, following Antonio, was talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away. I'm specifically interested in some of the barriers to achieving insights when customers are drowning in data. What do you hear from customers? What were your takeaways from some of the ones you talked about today? >> Very pertinent question, Dave.
You know, of the two challenges I spoke about, that we need to overcome in order to thrive in this new age, the first one is the current challenge. And that current challenge is, you know, the barriers to insight when we are awash with data. So that's the statement: how to overcome those barriers. In the Day 2 keynote, I spoke about three main barriers that we see from customers. The first barrier is that, with many of our customers, data is siloed. You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that when you build applications above, they can be more intelligent. They can have access to all the different silos of data to get better intelligence, and more intelligent applications get built. So that was the first barrier we spoke about: barriers to insight when we are awash with data. The second barrier we see amongst our customers is that data is raw and dispersed when stored, and it's tough to get value out of it. In that case I used the example of the May 6, 2010 event, where the stock market dropped a trillion dollars in tens of minutes. Those who are financially attuned know about this incident. But this is not the only incident; there are many of them out there. And for that particular May 6 event, you know, it took a long time to get insight; for months we had no insight as to what happened or why it happened. And there were many other incidences like this, and the regulators were looking for that one rule that could mitigate many of these incidences. One of our customers decided to take the hard road, to go with the tough data, because data is raw and dispersed.
So they went into all the different feeds of financial transaction information, took the tough road, and analyzed that data. It took a long time to assemble. And they discovered that there was quote stuffing: people were sending a lot of trades in and then canceling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is that the processed reports everybody sees had a rule in there saying that all trades of less than 100 shares don't need to be reported. And so what people did was send a lot of less-than-100-share trades, to fly under the radar while doing this manipulation. So here is the second barrier: data can be raw and dispersed. Sometimes you just have to take the hard road to get insight, and this is one great example. And then the last barrier has to do with the times when you start a project to get answers and insight, and you realize that all the data's around you, but you don't seem to find the right data to get what you need. Here we have three quick examples of customers. One was a great example where they were trying to build a language translator, a machine language translator between two languages. But in order to do that, they needed hundreds of millions of word pairs: words in one language paired with the corresponding words in the other. They said, "Where am I going to get all these word pairs?" Someone creative thought of a willing source, and a huge one: it was the United Nations. You see, so sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data. The second one: sometimes you may just have to generate that data. Interesting one. We had an autonomous car customer that collects all this data from their cars. Massive amounts of data; lots of sensors collecting lots of data.
And, you know, sometimes they don't have the data they need even after collection. For example, they may have collected data with a car in fine weather, collected the car driving on the highway in rain, and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car could drive in hail, they built a simulation, taking the data the car collected in snow and simulating hail. So these are some of the examples where we have customers working to overcome barriers. You have barriers associated with siloed data, which they federated; barriers associated with data that's tough to get at, where they just took the hard road; and sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow, I tell you, I have about 100 questions based on what you just said. That's a great example, the flash crash; in fact Michael Lewis wrote about this in his book, "Flash Boys," and essentially it was high-frequency traders trying to front-run the market, sending in small block trades trying to get sort of front-ended. And they chalked it up to a glitch; like you said, for months nobody really knew what it was. So technology got us into this problem. I guess my question is, can technology help us get out of the problem? And that maybe is where AI fits in. >> Yes. Yes. In fact, a lot of analytics work went into going back to the raw data, which is highly dispersed across different sources, and assembling it to see if you can find a material trend. You can see lots of trends. When we humans look at things, we tend to see patterns in clouds. So sometimes you need to apply statistical analysis and math to be sure that what the model is seeing is real, and that required work. That's one area. The second area is, you know, there are times when you just need to go through that tough approach to find the answer.
Now, the issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees, and in this case, that was before the change in the rules. By the way, after the discovery, the authorities changed the rules: all trades of any size have to be reported. But the earlier rule said that trades under 100 shares need not be reported. So sometimes you just have to understand that reports were designed by humans, and for understandable reasons. I mean, they probably, for various reasons, didn't want to put everything in there, so that people could still read the report in a reasonable amount of time. But we need to understand that the rules for the reports we read were put in by humans. And as such, there are times we just need to go back to the raw data. >> I want to ask you-- >> Albeit that it's going to be tough. >> Yeah, so I want to ask you a question about AI, as obviously it's in your title and it's something you know a lot about, and I'm going to make a statement; you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling, data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. So, two-part question. Will the consumer market, as it has so often, inform the enterprise? That's the first part. And then, will there be a shift from modeling, if you will, to more, you mentioned autonomous vehicles, AI inferencing in real time, especially with the Edge? I think you can help us understand that better. >> Yeah, this is a great question. There are three stages, to simplify; I mean, it's probably more sophisticated than that, but let's just simplify: there are three stages to building an AI system that ultimately can make a prediction.
Or to assist you in decision-making, to produce an outcome. So you start with the data, massive amounts of data, and you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing. It evolves to a point where, using a test set of data that you have separately kept aside, one you know the answers for, you test the model, after you've trained it with all that data, to see whether its prediction accuracy is high enough. And once you are satisfied with it, you then deploy the model to make the decision, and that's the inference. So a lot of times it depends on what we are focusing on. In data science, are we working hard on assembling the right data to feed the machine with? That's the data preparation and organization work. And then after that, you build your models. You have to pick the right models for the decisions and predictions you want to make. You pick the right models and then you start feeding the data in. Sometimes you pick one model and the prediction isn't that robust; it's good, but it's not consistent. What you do then is try another model. So sometimes you just keep trying different models until you get the right kind that gives you robust decision-making and prediction. Now, after that, if it's tested well, okay, you will then take that model and deploy it at the Edge. And at the Edge, you're essentially just looking at new data, applying it to the model that you have trained, and then that model will give you a prediction or a decision. So it is these three stages. But more and more, your question reminds me, as the Edge becomes more and more powerful, people are asking: can you also do learning at the Edge? That's the reason why we spoke about swarm learning the last time: learning at the Edge as a swarm.
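The three stages Dr. Goh describes (train on data, test on a set kept aside, then deploy for inference) can be sketched end to end. A minimal nearest-centroid classifier in plain Python, purely for illustration; it is not any particular framework's API:

```python
# Stage 1: train, i.e. evolve a model (here, per-class centroids) from the data
def train(samples):          # samples: list of (feature, label) pairs
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}   # the "model"

# Stage 3: deploy, i.e. inference on new data (used by stage 2 as well)
def predict(model, x):
    return min(model, key=lambda y: abs(x - model[y]))

# Stage 2: test, i.e. check prediction accuracy on data kept aside
def accuracy(model, test_set):
    correct = sum(1 for x, y in test_set if predict(model, x) == y)
    return correct / len(test_set)

train_set = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
test_set  = [(1.5, "low"), (8.5, "high")]   # kept aside, answers known
model = train(train_set)
print(accuracy(model, test_set))  # 1.0, accurate enough to deploy
print(predict(model, 7.0))        # inference on new data: "high"
```

If the accuracy on the held-aside set were poor, you would try another model, exactly the loop described above.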
Because individually they may not have enough power to do so, but as a swarm, they may. >> Is that learning from the Edge or learning at the Edge? In other words, is it-- >> Yes. >> Yeah, you understand my question, yeah. >> That's a great question. That's a great question. So the answer is learning at the Edge, and also from the Edge, but the main goal is to learn at the Edge so that you don't have to move the data the Edge sees back to the cloud or the core to do the learning. That's one of the main reasons why you want to learn at the Edge: so that you don't have to send all that data back and assemble it from all the different Edge devices at the cloud side to do the learning. With swarm learning, you can keep the data at the Edge and learn at that point.
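The swarm idea, where each node learns on its own data and only model parameters, never raw data, get merged, can be illustrated with a weighted parameter average. A toy sketch of the principle, not HPE's actual Swarm Learning implementation:

```python
def local_train(data):
    """Each edge node fits its own tiny 'model' (here, just a mean) locally."""
    return sum(data) / len(data), len(data)   # (parameter, sample count)

def swarm_merge(local_models):
    """Merge parameters weighted by sample count; raw data never leaves a node."""
    total = sum(n for _, n in local_models)
    return sum(p * n for p, n in local_models) / total

# Three edge nodes, each holding its own private data
node_data = [[1.0, 2.0, 3.0], [10.0, 12.0], [5.0]]
merged = swarm_merge([local_train(d) for d in node_data])
print(merged)   # 5.5, identical to training on the pooled data
```

Only two numbers per node cross the network here, instead of every raw sample, which is the bandwidth and privacy win that makes learning at the Edge attractive.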
So: what, when, and where. For what data to collect, the question you asked, it's a question that different industries have to ask themselves, because the answer will vary. Let me use the autonomous car example. You have this customer collecting massive amounts of data. We're talking about 10 petabytes a day from a fleet of their cars, and these are not production autonomous cars; these are training autonomous cars, collecting data so they can train and eventually deploy commercial cars. These data collection cars, as a fleet, collect 10 petabytes a day. And then when it came to us building a storage system to store all of that data, they realized they can't afford to store all of it. Now here comes the dilemma: after I've spent so much effort building all these cars and sensors and collecting data, I now have to decide what to delete. That's a dilemma. In working with them on this process of trimming down what they collected, I'm constantly reminded of the '60s and '70s. In the '60s and '70s, we called a large part of our DNA junk DNA. Today we realize that a large part of what we called junk has function, valuable function. They are not genes, but they regulate the function of genes. So what was junk yesterday could be valuable today, and what's junk today could be valuable tomorrow. So there's this tension going on: between deciding you can't afford to store everything you can get your hands on, and on the other hand worrying that you ignore the wrong ones. You can see this tension in our customers. And then it depends on the industry. In healthcare they say, I have no choice, I want it all. Why? One very insightful point brought up by one healthcare provider that really touched me was: you know, we don't only care, of course we care a lot, about the people we are caring for.
But we also care about the people we are not caring for. How do we find them? And therefore, they don't just need to collect the data they have from their patients; they also need to reach out to outside data so that they can figure out who they are not caring for. So they want it all. So I asked them, "How do you fund it if you want it all?" They say they have no choice but to figure out a way to fund it, and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us, rightfully, saying we then have to work out a way to help them build such a system. So that's healthcare. And if you go to other industries like banking, they say they can afford to keep it all. But they are regulated, same as healthcare: regulated as to privacy and the like. So many examples: different industries having different needs and different approaches to what they collect. But there is this constant tension between perhaps deciding not to fund all that you can store, and on the other hand worrying that if you decide not to store some of it, that some may become highly valuable in the future. You worry. >> Well, we can make some assumptions about the future, can't we? I mean, we know there's going to be a lot more data than we've ever seen before, we know that. We know, notwithstanding supply constraints in things like NAND, that the price of storage is going to continue to decline. We also know, and not a lot of people are really talking about this, that the processing power, everybody says Moore's Law is dead, okay, it's waning, but the processing power, when you combine the CPUs and NPUs and GPUs and accelerators and so forth, is actually increasing. And so when you think about these use cases at the Edge, you're going to have much more processing power.
You're going to have cheaper storage, and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, it's again another insightful question, one we touched on in our keynote, and that goes to the where: where will your data be? We have one estimate that says that by next year there will be 55 billion connected devices out there. 55 billion. What's the population of the world? On the order of 10 billion, but this is 55 billion, and most of them can collect data. So what do you do? The amount of data that's going to come in is going to way exceed our drop in storage costs and our increase in compute power. So what's the answer? Even with prices dropping and bandwidth increasing, the data from 55 billion collecting devices will overwhelm 5G. So the answer must be that there needs to be a balance between bringing all that data from the 55 billion devices back to a central core, or a bunch of central cores, and the fact that you may not be able to afford to do that. Firstly, bandwidth: even with 5G, it will still be too expensive given the number of devices out there. And even with storage costs dropping, it'll still be too expensive to try and store it all. So the answer must be, at least to mitigate the problem, to leave a lot of the data out there and only send back the pertinent data, as you said before. But then, if you did that, how are we going to do machine learning at the core and the cloud side if you don't have all the data? You want rich data to train with. Sometimes you want a mix of the positive-type data and the negative-type data, so you can train the machine in a more balanced way.
So the answer must be, eventually, as we move forward with this huge number of devices at the Edge, to do machine learning at the Edge. Today we don't even have the power. The Edge typically is characterized by lower energy capability and therefore lower compute power. But soon, even with low energy, these devices can do more, with compute power improving in energy efficiency. Today we do inference at the Edge: we collect data, build a model, deploy it, and do inference at the Edge. That's what we do today. But more and more, I believe, given the massive amount of data at the Edge, you have to start doing machine learning at the Edge. And when you don't have enough power, you aggregate multiple devices' compute power into a swarm and learn as a swarm. >> Oh, interesting. So now, of course, if I were a fly on the wall in an HPE board meeting, I'd say, "Okay, HPE is a leading provider of compute. How do you take advantage of that?" I know it's the future, but you must be thinking about that and participating in those markets. I know today you have Edgeline and other products, but it seems to me that it's not the general purpose computing that we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that opportunity for your customers? >> The world will have to have a balance. Today the default, or the more common mode, is to collect the data from the Edge and train at some centralized location, or a number of centralized locations. Going forward, given the proliferation of Edge devices, we'll need a balance, we need both. We need capability on the cloud side, and it has to be hybrid. And then we need capability on the Edge side. We need to build systems that on one hand are Edge-adapted, meaning environmentally-adapted, because the Edge environment is different. 
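The swarm idea Dr. Goh describes, aggregating many low-power Edge devices that each keep their data local and learn collectively, resembles what is often called federated averaging. Below is a minimal sketch of that pattern; all function names and numbers are illustrative assumptions, not HPE's actual swarm-learning API. Each device takes a training step on its own data, and only model weights, never raw data, are shared and averaged each round.

```python
# Sketch of swarm-style learning at the Edge: devices keep raw data local
# and exchange only model weights, which are averaged each round.
# Hypothetical illustration, not HPE's swarm-learning implementation.

def local_step(weights, data, lr=0.1):
    """One gradient step of a one-weight linear model y ~ w*x on one device's data."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def swarm_round(weights, devices):
    """Each device trains locally; the swarm then averages the resulting weights."""
    updates = [local_step(weights, d) for d in devices]
    return sum(updates) / len(updates)

# Three edge devices, each observing noisy samples of y = 3x.
devices = [
    [(1.0, 3.0), (2.0, 6.1)],
    [(1.5, 4.4), (3.0, 9.2)],
    [(0.5, 1.6), (2.5, 7.4)],
]

w = 0.0
for _ in range(50):
    w = swarm_round(w, devices)
print(round(w, 2))  # converges near 3.0
```

The key property is the one Goh emphasizes: the pertinent information (model updates) travels, while the bulk of the raw data stays at the Edge.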
A lot of times they're on the outside, so they need to be packaging-adapted and also power-adapted, because typically many of these devices are battery-powered. So you have to build systems that adapt to that. But at the same time, they must not be custom. That's my belief. They must use standard processors and standard operating systems so that they can run a rich set of applications. So yes, that's our thinking there. Antonio announced in 2018 a $4 billion investment over the next four years to strengthen our Edge portfolio, our Edge product lines, Edge solutions. >> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of certainly HPE, but the industry in general? >> Yeah, I think the excitement is the customers. The diversity of customers, and the diversity in the way they have approached their different problems with data strategy. So the excitement is around data strategy. The statement Antonio made for us was profound. He said we are in the age of insight, powered by data. That's the first line. The line that comes after that is: as such, we are becoming more and more data-centric, with data as the currency. Now the next step is even more profound. That is, we are going as far as saying that data should not be treated as cost anymore. Instead, it is an investment in a new asset class called data, with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data and the way we value it. So this is the exciting thing, because for me, as a CTO for AI, a machine is only as intelligent as the data you feed it with. Data is the source of a machine learning system's intelligence. 
So that's why, when people start to value data and say that it is an investment when we collect it, it is very positive for AI, because an AI system gets more intelligent when it has huge amounts of data and a diversity of data. So it would be great if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement, data products and people monetizing data services, and maybe eventually you'll see it on the balance sheet. Doug Laney, when he was at Gartner, wrote a book about this, and a lot of people are thinking about it. That's a big change, isn't it, Dr. Goh? >> Yeah, yeah. Your question is about the process and methods of valuation. But I believe we'll get there. We need to get started, and then we'll get there, I believe. >> Dr. Goh, it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh yeah, no doubt. People will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back on theCube. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. (both chuckling) >> Well, excellent, we'll leave it there. Thank you for spending some time with us, and keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCube, the leader in enterprise tech coverage. We'll be right back. (upbeat music)
Sandeep Singh, HPE
(upbeat music) >> Hi everybody, this is Dave Vellante. And with me is Sandeep Singh, the vice president of Storage Marketing at Hewlett Packard Enterprise. We're going to riff on some of the trends in the industry and what we're seeing. And we've got a little treat for you. Sandeep, great to see you, man. >> Dave, it's a pleasure to be here. >> You and I have known each other for a long time. We've had some great discussions, some debates, some intriguing mind benders. What are you seeing out there in storage? So much has changed. What are the key trends you're seeing? Let's get into it. >> Yeah, across the board, as you said, so much has changed when you reflect on the underlying transformation that's taken place with data, cloud and AI. First of all, our customers are seeing a massive data explosion that literally now spans edge to core to cloud. They're also seeing a diversity of application workloads across the board. And the emphasis that places is on the complexity that underlies overall infrastructure and data management. Across the board, we're hearing a lot from customers about the underlying infrastructure complexity and the infrastructure sprawl. And then the second element of that really extends into the complexity of data management. >> So it's interesting you're talking about data management. You remember, you and I, we were in Andover. It was probably five years ago, and all we were talking about was media. Flash this and flash that, and at the time that was kind of the hot storage topic. Well, flash came in, addressing some of the bottlenecks that we historically talked about. Now the problem statement is, quote unquote, metaphorically moving up the stack if you will. You mentioned management, but let's dig into that a little bit. I mean, what is management? That means different things to different people. You talk to a database person or a backup person. 
How do you look at management? What does that mean to you? >> Yeah, Dave, you mentioned that flash came in and accelerated the overall speed and latency that storage was delivering to application workloads. But fundamentally, when you look back at storage over a couple of decades, the underlying way you manage storage hasn't fundamentally changed. There's still an incredible amount of complexity for IT. It's still a manual, admin-driven experience for customers. And what that translates to is, more often than not, IT is in a world of firefighting, and it leaves them unable to take on the more strategic projects to innovate for the business. And basically IT has that pressure point of moving beyond that, bringing the greater levels of agility that line-of-business owners are asking for, and delivering on more of the strategic projects. So that's one element of it. The second element we're hearing from customers about is that as more and more data just continues to explode from edge to core to cloud, and as the infrastructure has grown from just being on-prem to being at the edge and in the cloud, that complexity is expanding from just being on-prem to spanning multiple different clouds. So when you look across the data life cycle, how do you store it? How do you secure it? How do you protect it, archive it and analyze that data? That end-to-end life cycle management of data today resides across a fragmented set of infrastructure, tools, processes and administrative boundaries. That's creating a massive challenge for customers. And the impact of that ultimately comes at a cost to agility and innovation, and ultimately adds business risk. >> Yeah, so we've seen obviously the cloud has addressed a lot of these problems, but the problem is the cloud is in the cloud, and much of my stuff, most of my stuff, isn't in the cloud. 
So I have all these other workloads that are on-prem, and now you've got this emerging Edge. And so I wonder if we could just talk a little vision here for a minute. What I've been envisioning is this abstraction layer that cuts across everything. It doesn't really matter where it is: on-prem, across clouds, in the cloud, at the edge, and we can talk about what that all means. But the customers that I talk to, they're sort of done with the complexity of that underlying infrastructure. They want technology to take care of that. They want automation, they want AI brought into that equation. And it seems like we're on the cusp of the decade where that might happen. What's your take? >> Well, yeah, certainly. I mentioned that data, cloud and AI are really the disruptive forces propelling the digital transformation for customers. Cloud has set the standard for agility, and AI-driven insights and intelligence are really helping to make the underlying infrastructure invisible. And customers are looking for this notion of being able to get that cloud operational agility pretty much everywhere, because they're discovering that that's a game changer. And yet a lot of their application workloads and data is on-prem and is increasingly growing at the edge. So they want the same experience, to be able to truly bring that agility to wherever their data is. And that's one of the things that we're continuing to hear from customers. >> And this problem is just going to get worse. I mean, for decades we marched to the cadence of Moore's Law, and everybody forgets about Moore's Law and says, "Ah, it's dying" or whatever. But actually, when you look at the processing power that's coming out now, it's more than doubling every two years, quadrupling every two years. So now you've got this capability in your hands, and application designers, storage companies, networking companies. 
They're going to have all this power to now bring in AI and do things that we've never even imagined before. So it's not about the box and the speeds and feeds of the box. It's really more about this abstraction layer that I was talking about, the management, if you will, that you were discussing, and what we can do in terms of powering new workloads with machine intelligence. It's this kind of ubiquitous, call it the cloud, but it's expanding pretty much everywhere, into every part of our lives, even to the edge. You think about autonomous vehicles, you think about factories. It's actually quite mind-boggling where we're headed. >> It is, and you touched upon AI. Certainly when you look at infrastructure, there's been a ton of complexity in infrastructure management. One of the studies, done by IDC, indicated that over 90% of the challenges that ultimately show up at the storage infrastructure layer powering the apps actually arise from way above in the stack, from the server layer on down, or even the virtual machine layer. And there, for example, AIOps for infrastructure has become a game changer for customers, bringing the power of AI and machine learning and multivariate analysis to predict and prevent issues. Dave, you also touched upon the Edge. Across the board, what we're seeing is the Enterprise Edge becoming the frontier for customer experiences, and the opportunity to reimagine those experiences, as well as the frontier for commerce, when you look at retail and manufacturing or financial services. So across the board, with the data growth that's happening and this Edge becoming the strategic frontier for delivering customer experiences, how you power your application workloads there, how you deliver and protect that data, and how you seamlessly manage that overall infrastructure. 
As you mentioned, abstracted away at a higher level, that becomes incredibly important for customers. >> So it's interesting to hear how the conversation has changed. I go back to, whatever it was, five years ago, and we were talking about flash, storage class memory, NVMe. Those things are still there, but your emphasis now is on machine learning, AI, the math around deep learning. It's really software that you're focusing on these days. >> Very much so. Certainly this notion of software and services that deliver and unlock a whole new experience for customers, that's really the game changer going forward. And that's what we're focused on. >> Well, I said we had a little surprise for you. You guys are having an event on May 4th. It's called Unleash The Power of Data. What's that event all about, Sandeep? >> Yeah, we are very much excited about our May 4th event. As you mentioned, it's called Unleash The Power of Data. And as most organizations today are data-driven, and data is at the heart of what they're doing, we're excited to invite everyone to join this event. Through this event we're unveiling a new vision for data that accelerates data-driven transformation from edge to cloud. This promises to be a pivotal event, one that IT admins, cloud architects, virtual machine admins, vice presidents and directors of IT, and CIOs really won't want to miss. Across the board, this event brings a new way of articulating the overall problem statement and the trends that we were just discussing. It's an event that's going to be hosted by business and technology journalist Shibani Joshi. It will feature a panel focused on the crucial role that data is playing in customers' digital transformation. 
It will also feature Antonio Neri, CEO of HPE, and Tom Black, senior vice president and general manager of the HPE storage business, along with industry experts including Julia Palmer, research vice president at Gartner. We will unveil game-changing HPE innovations that will make it possible for organizations across edge to cloud to unleash the power of data. >> Sounds like a great event. I presume I can go to hpe.com and get information. Is it a registered event? How does that all work? >> Yeah, we invite everyone to visit hpe.com, and by visiting there you can click and save the date of May 4th at 8:00 AM Pacific. We invite everyone to join us. We couldn't be more excited to get to this event and share the vision and game-changing HPE innovations. >> Awesome. So I don't have to register, right? I don't have to give up my three children's names and my social security number to attend your event. Is that right? >> No registration required. Come by, click on hpe.com, save the date on your calendar. And we very much look forward to having everyone join us for this event. >> I love it, it's a pure content event. I'm not going to get a phone call afterwards saying, "Hey, buy some stuff from me." That could come through other channels, but that's good. Thank you for that, and thanks for providing that service to the industry. I'm excited to see what you guys are going to be announcing that day. And look, Sandeep, like I said, we've known each other a while. We've seen a lot of trends, but the next 10 years, it ain't going to look like the last 10, is it? >> It's going to be very different, and we couldn't be more excited. >> Well, Sandeep, thanks so much for coming on theCube and riffing with me on the industry, and giving us a preview of your event. Good luck with that. And always great to see you. >> Thanks a lot, Dave. Always great to see you as well. >> All right, and thank you everybody. This is Dave Vellante for theCube, and we'll see you next time. (upbeat music)
Pradeep Sindhu, Fungible | theCUBE on Cloud 2021
>> From around the globe, it's theCube, presenting theCUBE on Cloud, brought to you by SiliconANGLE. As I've said many times on theCube, for years, decades even, we've marched to the cadence of Moore's Law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years, alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade, we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCube. Great to see you. >> Thank you, Dave. And thank you for having me. >> You're very welcome. So my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >> You know, that is a natural question to ask, and CPUs have been around in one form or another for almost 55, maybe 60 years. This is when general purpose computing was invented, and essentially all CPUs went to the x86 architecture. Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general purpose CPUs has been refined heavily by some of the smartest people on the planet. 
And for the longest time, the improvements, you referred to Moore's Law, which is really the improvement of the price-performance of silicon over time, that, combined with architectural improvements, was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to squeeze much more blood out of that stone from general purpose computer architectures. What has also happened over the last decade is that Moore's Law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10 to 20% improvements in each generation in the speed of the transistor, if that. And the spacing between successive generations of technology is actually increasing, from 2 to 2.5 years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well recognized, and we have to understand that they apply not just to general purpose CPUs, but also to GPUs. Now, general purpose CPUs do one kind of computation. They're really general, and they can do lots and lots of different things. It's actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating point arithmetic operations much, much better than CPUs, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but in the last decade or so they have been used heavily for AI and analytics computations. So now the question is, why do you need another specialized engine called the DPU? Well, I started down this journey almost eight years ago. I recognized, I was still at Juniper Networks, which is another company that I founded, I recognized that in the data center, as the workload changes to address larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what data-centric means. 
Well, I started down this journey about almost eight years ago, and I recognize I was still at Juniper Networks, which is another company that I found it. I recognize that in the data center, um, as the workload changes due to addressing Mawr and Mawr, larger and larger corpus is of data number one. And as people use scale out as the standard technique for building applications, what happens is that the amount of East West traffic increases greatly. And what happens is that you now have a new type off workload which is coming, and today probably 30% off the workload in a data center is what we call data centric. I want to give you some examples of what is the data centric E? >>Well, I wonder if I could interrupt you for a second, because Because I want you to. I want those examples, and I want you to tie it into the cloud because that's kind of the topic that we're talking about today and how you see that evolving. It's a key question that we're trying to answer in this program. Of course, Early Cloud was about infrastructure, a little compute storage, networking. And now we have to get to your point all this data in the cloud and we're seeing, by the way, the definition of cloud expand into this distributed or I think the term you use is disaggregated network of computers. So you're a technology visionary, And I wonder, you know how you see that evolving and then please work in your examples of that critical workload that data centric workload >>absolutely happy to do that. So, you know, if you look at the architectural off cloud data centers, um, the single most important invention was scale out scale out off identical or near identical servers, all connected to a standard i p Internet network. That's that's the architectural. Now, the building blocks of this architecture er is, uh, Internet switches, which make up the network i p Internet switches. 
And then the servers, all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture, Dave, is a compute-centric architecture. And the reason it's compute-centric is that if you open a server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs, which are in the middle of the action. Not only are the CPUs processing the application workload, they're processing all of the I/O workload, what we call data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the I/O. Every I/O has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many millions of times a second. Now, general purpose CPUs and their architecture were never designed to play traffic cop, because the traffic cop function requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe 1 to 2% to 30 to 40%. I'll give you some numbers, which are absolutely stunning. If you go back to, say, 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 50 megahertz. The network was running at three megabits per second. 
Well, today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 to 3 gigahertz. So you've seen that there has been a 600x change in the ratio of I/O to compute, just on raw clock speed. Now you can tell me, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of I/O relative to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. The DPU actually solves two fundamental problems in cloud data centers, and these are fundamental, there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. Okay, that's number one. Problem number two is that these data-centric computations, and I'll give you four examples: the network stack, the storage stack, the virtualization stack and the security stack, those four are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs, you'll run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It is a clean sheet design, and it solves those two problems fundamentally. >> So I want to get into that. 
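As an aside, the 600x shift Sindhu describes can be sanity-checked with the numbers he gives. A quick back-of-the-envelope sketch, using a single core's clock as a rough proxy for compute, as he does:

```python
# Back-of-the-envelope check of the I/O-to-compute ratio shift Sindhu cites,
# using a single core's clock speed as a rough proxy for compute throughput.

net_1987 = 3e6        # network: 3 megabits per second
cpu_1987 = 50e6       # CPU clock: 50 MHz

net_today = 100e9     # network: 100 gigabits per second
cpu_today = 3e9       # single-core clock: ~3 GHz

ratio_1987 = net_1987 / cpu_1987      # 0.06
ratio_today = net_today / cpu_today   # ~33.3

shift = ratio_today / ratio_1987
print(round(shift))  # 556, on the order of the ~600x cited
```

The exact figure depends on which clock speed you pick, which is presumably why the transcript rounds it to roughly 600x; and, as Sindhu notes, multiplying core counts back in still leaves roughly two orders of magnitude of change.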
But I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. And the workloads that we have today are very data-heavy. You take AI, for example, or analytics. It's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go, especially when people have figured out a formula that, hey, the more data I collect, the more I can use those insights to make money. >> Yeah, this is why I wanted to talk to you, because for the last 10 years we've been collecting all this data. Now I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. The first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud. And there was a security angle there as well, which is a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So, you know, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture and how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve this problem. So help us understand the architecture and how you do solve this problem. 
And I use this term very specifically, because first let me define what I mean by a data-centric computation, because that's the essence of the problem we solve. Remember, I said two problems. One is that we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first, let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many computations happening concurrently, thousands of them. That's number two: a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order, because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of I/O to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general-purpose CPUs. Not only does the general-purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, our architecture consists of very heavily multithreaded general-purpose CPUs combined with very heavily threaded specific accelerators.
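The four criteria just listed can be made concrete with a small checklist. Here is a minimal sketch in Python; the workload fields, example values and the I/O-ratio threshold are invented for illustration and are not Fungible's actual classification logic:

```python
# Hedged sketch: classify a workload as "data-centric" using the four
# criteria described above. Field names and the 0.5 I/O-ratio threshold
# are illustrative assumptions, not a real product's logic.
from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool       # 1. comes over the network as packets
    concurrent_contexts: int       # 2. degree of multiplexing
    stateful: bool                 # 3. packets must be processed in order
    io_to_arithmetic_ratio: float  # 4. ratio of I/O work to arithmetic work

def is_data_centric(w: Workload) -> bool:
    return (w.arrives_as_packets
            and w.concurrent_contexts >= 1000     # "thousands" of computations
            and w.stateful
            and w.io_to_arithmetic_ratio >= 0.5)  # medium-to-high I/O

# A storage/virtualization-style workload checks all four boxes...
storage_stack = Workload(True, 5000, True, 0.9)
# ...while a dense matrix multiply is compute-bound and fails two of them.
matmul = Workload(False, 8, False, 0.05)

print(is_data_centric(storage_stack))  # True
print(is_data_centric(matmul))         # False
```

The point of the sketch is only that all four properties have to hold at once; miss any one and a general-purpose CPU handles the job reasonably well.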
I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators; those are just some of them. These are functions that, if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there. These accelerators have to be multithreaded to handle, you know, we have something like 1,000 different threads inside our DPU, to address these many, many computations that are happening concurrently and handle them efficiently. Now, the thing that is very important to understand is this: I know that we have hundreds of billions of transistors on a chip, but the problem is that those transistors are used very inefficiently today if the architecture is that of a CPU or GPU. What we have done is improve the efficiency of those transistors by 30 times. >> So you can use the real estate more effectively. >> Much more effectively, because we were not trying to solve a general-purpose computing problem. Because if you do that, you're going to end up in the same bucket where general-purpose CPUs are today. We were trying to solve the specific problem of data-centric computations, and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. In a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high-performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why. Well, the reason is that if I try to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today.
There's only one solution today, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. But TCP was never designed to handle the latencies and speeds inside the data center. It's a wonderful protocol, but it was invented 42, 43 years ago now. >> Very reliable, tested and proven. It's got a good track record. >> A very good track record. Unfortunately, it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP, how would you apply it to the data center, that's what we've done with what we call FCP, a fabric control protocol, which we intend to open; we intend to publish standards and make it open. And when you embed FCP in hardware, on top of a standard IP Ethernet network, you end up with the ability to run very-large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%, and you end up solving the problems of congestion at the same time. Now, why is this important? That's all geek-speak so far, but the reason this stuff is important is that such a network allows you to disaggregate, pool and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side and storage on the other side, and increasingly even things like DRAM want to be disaggregated and pooled. Well, if I put everything inside a general-purpose server, the problem is that those resources get stranded, because they're stuck behind the CPU. Once you disaggregate those resources, and we say hyper-disaggregate, where "hyper" simply means that you can disaggregate almost all the resources... >> And then you're going to re-aggregate them, right? I mean, that's... >> Obviously, exactly, and the network is the key enabler.
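Those utilization numbers translate directly into usable bandwidth from the same wires. A back-of-the-envelope sketch, where the 100 Gb/s link speed is an assumed figure for illustration (only the utilization percentages come from the conversation):

```python
# Back-of-the-envelope: usable bandwidth of one link at TCP-era
# utilization (20-25%) versus the 90-95% quoted for a hardware FCP
# fabric. The 100 Gb/s link speed is an illustrative assumption.
link_gbps = 100

tcp_usable = link_gbps * 0.25  # best case quoted for today's fabrics
fcp_usable = link_gbps * 0.95  # best case quoted for FCP

print(f"TCP-style fabric: {tcp_usable:.0f} Gb/s usable")
print(f"FCP-style fabric: {fcp_usable:.0f} Gb/s usable")
print(f"Gain: {fcp_usable / tcp_usable:.1f}x")  # ~3.8x from the same wires
```

In other words, the claim is that most of the gain comes not from faster links but from actually being able to drive the links you already bought.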
So the reason the company is called Fungible is because we are able to disaggregate, virtualize and then pool those resources. And the scale-out companies, you know, the large ones, AWS, Google, etcetera, have been doing this disaggregation and pooling for some time. But because they've been using a compute-centric architecture, this disaggregation is not nearly as efficient as we can make it; they're off by about a factor of three. When you look at enterprise companies, they're off by another factor of four, because the utilization in enterprises is typically around 8% of overall infrastructure, while the utilization in the cloud for AWS, GCP and Microsoft is closer to 35 to 40%. So there is a factor of almost 4 to 8 which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt again. These hyperscalers are smart; they have a lot of engineers. And we've seen them, you're right, using a lot of general-purpose compute, but we've seen them make moves toward GPUs and embrace things like ARM. So I know you can't name names, but with all the data that's in the cloud, again, our topic today, you would think the hyperscalers are all over this. >> All the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean. >> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things.
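The utilization figures just quoted imply a concrete hardware multiple, since the gear needed to serve a fixed amount of useful work scales as one over utilization. A quick sketch of that arithmetic (the 100 units of demand is an arbitrary illustrative figure; the percentages are the ones quoted above):

```python
# If utilization is the fraction of purchased capacity doing useful
# work, hardware needed for a fixed demand scales as 1/utilization.
demand_units = 100       # arbitrary fixed amount of useful work

enterprise_util = 0.08   # ~8% typical enterprise utilization (quoted)
cloud_util = 0.375       # midpoint of the 35-40% quoted for hyperscalers

enterprise_capacity = demand_units / enterprise_util
cloud_capacity = demand_units / cloud_util

print(round(enterprise_capacity))  # 1250 capacity units purchased
print(round(cloud_capacity))       # 267 capacity units purchased
print(round(enterprise_capacity / cloud_capacity, 1))  # ~4.7x gap
```

That midpoint gives a gap near the low end of the quoted 4-to-8 range; taking the extremes of both utilization figures is what stretches it toward 8.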
And eventually it will be understood that you need a third element, called the DPU, to address these problems. Now, of course, you've heard the term smart NIC, and all your listeners must have heard that term. Well, a smart NIC is not a DPU. What a smart NIC is, is simply taking general-purpose ARM cores, putting in a network interface and a PCIe interface, integrating them all in the same chip and separating them from the CPU. So this does solve a problem. It solves the problem of the data-centric workload interfering with the application workload. Good job. But it does not address the architectural problem of how to execute data-centric workloads efficiently. >> Yeah, I understand what you're saying. I was going to ask you about smart NICs. It's almost like a bridge or a Band-Aid. It reminds me of throwing flash storage on a disk system that was designed for spinning disk: it gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, that analogy is close. Okay, so let's take hyperscaler X, not to name names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C and C++ on x86. Well, the easiest thing to do is to separate out the cores that run this workload and put it on a different processor. Let's say we use ARM, simply because x86 licenses are not available for people to build their own CPUs, so ARM was available. So they put a bunch of ARM cores together, stick a PCI Express interface and a network interface on it, and port that code from x86 to ARM.
Not difficult to do, but it does yield you results. And by the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general-purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation, other than to port code from one place to another place. >> But that's what I'm saying. I would think, again, the hyperscalers: why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see, when the hyperscalers, Microsoft Azure and AWS, both announced, I think, that they now depreciate servers over five years instead of four, it dropped like a billion dollars to their bottom lines. So why not just work directly with you guys? I mean, it's the logical play. >> Some of them are working with us. So it's not to say that they're not working with us. All of the hyperscalers recognize that the technology we're building is fundamental, that we have something really special, and, moreover, it's fully programmable. You know, the whole trick is, you can actually build a lump of hardware that is fixed-function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is, how do you come up with an architecture where the functionality is programmable but is also very high speed for this particular set of applications?
So the analogy with GPUs is nearly perfect, because GPUs, and particularly NVIDIA, who invented CUDA, a programming language for GPUs, made them easy to use and fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture and we've made it very easy to program. And the computations that I talked about, security, virtualization, storage and then networking, those four are quintessential examples of data-centric workloads, and they're not going away. In fact, they're becoming more and more important over time. >> I'm very excited for you guys, and I really appreciate it, Pradeep. We're going to have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding, crypto accelerators; I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of, I like this term, a massively disaggregated network. Or hyper-disaggregated, even better. And I would say this on the way out, because I've got to go: what got us here the last decade is not the same as what's going to take us through the next decade. Pradeep, thanks so much for coming on theCUBE. It's a great company. >> You have it. It's really a pleasure to speak with you and get the message of Fungible out there. >> Well, I promise we'll have you back. And keep it right there, everybody; we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there.
Simon Crosby, Swim | Cube On Cloud
>> Hi, I'm Stu Miniman, and welcome back to theCUBE on Cloud, talking about really important topics as to how developers are changing how they build their applications and where those applications live; of course, a long discussion we've had for a number of years. You know, how do things change in hybrid environments? We've been talking for years about public cloud and private cloud, and I'm really excited for this session. We're going to talk about how edge environments and AI impact that. So happy to welcome back one of our CUBE alumni: Simon Crosby, currently the Chief Technology Officer with Swim. He's got plenty of viewpoints on AI, the edge, and knows the developer world well. Simon, welcome back. Thanks so much for joining us. >> Thank you, Stu, for having me. >> All right, so let's start for a second. Let's talk about developers. For years we talked about what level of abstraction we get. Do I put it on bare metal? Do I virtualize it? Do I containerize it? Do I make it serverless? A lot of those things the app developer doesn't want to even think about, but location matters a whole lot when we're talking about things like AI: where do I have all my data so that I can do my training? Where do I actually have to do the processing? And of course, edge changes things by orders of magnitude, things like latency and where data lives and everything like that. So with that as a setup, I would love to get your framework as to what you're hearing from developers, and then we'll get into some of the solutions that you and your team are helping them use to do their jobs. >> Well, you're absolutely right, Stu. The data onslaught is very real. Companies that I deal with are facing more and more real-time data, from products, from their infrastructure, from their partners, whatever it happens to be, and they need to make decisions rapidly.
And the problem that they're facing is that traditional ways of processing that data are too slow. So perhaps the big data approach, which by now is a bit old, a bit long in the tooth, where you store data and then analyze it later, is problematic. First of all, data streams are boundless, so you don't really know when to analyze. But second, you can't store it all. And so the store-then-analyze approach has to change, and Swim is trying to do something about this by adopting a process of analyzing on the fly. So as data is generated, as you receive events, you don't bother to store them. You analyze them, and then, if you have to, you store the data. But you need to analyze as you receive data and react immediately, to be able to generate reasonable insights or predictions that can drive commerce and decisions in the real world. >> Yeah, absolutely. I remember back in the early days of big data, "real time" got thrown around a little, but it was usually "I need to react fast enough to make sure we don't lose the customer," react to something. But it was: we gather all the data, and let's move compute to the data. Today, as you talk about, real-time streams are so important. We've been talking about observability for the last couple of years, to really understand the systems and the outputs, more than looking back historically at where things were and waiting for alerts. So could you give us some examples, if you would, as to those streams: what is so important about being able to interact with and leverage that data when you need it? And boy, it's great if we can use it then and not have to store it and think about it later; obviously there are some benefits there, because... >> Well, every product nowadays has a CPU, right? And so there's more and more data. And just let me give you an example: Swim processes real-time data from more than a hundred million mobile devices in real time, for a mobile operator.
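The analyze-as-you-receive pattern described above has a classic concrete form: single-pass (online) statistics that never retain the raw events. A minimal sketch using Welford's algorithm, with a simulated event stream standing in for a real one:

```python
# Hedged sketch of analyze-on-the-fly: maintain running statistics over
# a boundless event stream in a single pass (Welford's algorithm), so
# no raw events are ever stored. The event stream here is simulated.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:  # population variance of events seen so far
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for event in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(event)   # react per event; the event is then discarded

print(round(stats.mean, 6))      # 5.0
print(round(stats.variance, 6))  # 4.0
```

The state carried between events is three numbers, regardless of how many events arrive, which is the whole trick: the insight survives even though the raw stream does not.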
And what we're doing there is optimizing connection quality between devices and the network. Now, that volume of data is more than four petabytes per day, okay? There is simply no way you can ever store that and analyze it later. The interesting thing about this is that if you adopt an analyze-first, and then only if you really have to, store architecture, you get to take advantage of Moore's Law. You're running at CPU and memory speeds instead of at disk speed, and that gives you a million-fold speed-up. It also means you don't have the latency problem of reaching out to storage, a database, or whatever, and so that reduces cost. We can do it on about 10% of the infrastructure that they previously had for a Hadoop-style implementation. >> So maybe it would help if we just explain. When we say edge, people think of a lot of different things. Is it an IoT device sitting out at the edge? Are we talking about the telecom edge? We've been watching AWS for years, you know, spider out their services into various environments. So when you talk about the type of solutions you're doing and what your customers have, is it the telecom edge? Is it the actual device edge? Where does processing happen, and where do these services that work on it live? >> So I think the right way to think about edge is: where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze data in the clear. So edge computing is not so much a place, in my view, as the first opportunity you have to process data in the clear and make sense of it.
And then edge makes sense in terms of latency, by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users quickly. So edge, for me, often is the cloud. >> Excellent. One of the other things I think about, back from the big data days or even earlier, was how long it took to get from the raw data to processing that data, to being able to get some insight, and then being able to take action. It sure sounds like we're trying to collapse that completely. So how do we do that? Can we actually build the systems so that, in that real-time, continuous model you talk about, we can take care of it and move on? >> So one of the wonderful things about cloud computing is that two major abstractions have really served us, and those are REST, which is stateless computing, and databases. REST means any old server can do the job for me, and then the database is just an API call away. The problem with that is that it's desperately slow. And when I say desperately slow, I mean it's probably thrown away the last 10 years of Moore's Law. Just think about it this way: your CPU runs at gigahertz and the network runs at milliseconds. So by definition, every time you reach out to a data store, you're going a million times slower than your CPU. That's terrible. It's absolutely tragic, okay? So a model which is much more effective is to have an in-memory compute architecture in which you engage in stateful computation. So instead of having to reach out to a database every time, to update the database and store something, and then fetch it again a few moments later when the next event arrives, you keep state in memory and you compute on the fly as data arrives. And that way you get a million-times speed-up.
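That "million times slower" claim is easy to sanity-check with units. A quick sketch, where the clock speed and round-trip time are typical illustrative figures rather than measurements:

```python
# Sanity check on the "million times slower" claim: count how many CPU
# cycles pass while one network round trip to a data store completes.
# 3 GHz and 1 ms are typical illustrative figures, not measured values.
cpu_hz = 3e9          # ~3 GHz core: 3 billion cycles per second
network_rtt_s = 1e-3  # ~1 millisecond round trip to a remote database

cycles_per_roundtrip = cpu_hz * network_rtt_s
print(f"{cycles_per_roundtrip:.0e} cycles idle per lookup")  # 3e+06
```

Millions of potential instructions forfeited per lookup is where the "million times slower" figure comes from; keeping the state in local memory turns that round trip into a handful of cycles.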
You also end up with this tremendous cost reduction, because you don't end up with as many instances having to compute, by comparison. So let me give you a quick example. If you go to traffic.swim.ai, you can see the real-time state of the traffic infrastructure in Palo Alto, and each one of those intersections is predicting its own future. Now, the volume of data from just a few hundred lights in Palo Alto is about four terabytes a day. And sure, you can deal with this in AWS Lambda; there are lots and lots of servers up there. But the problem is that the end-to-end, per-event latency is about 100 milliseconds, and if I'm dealing with 30,000 events a second, that's just too much. So solving that problem with a stateless architecture is extraordinarily expensive, more than $5,000 a month, whereas the stateful architecture, which you could think of as an evolution of something reactive, or of the actor model, gets you something like a tenth of the cost, okay? So cloud is fabulous for things that need to scale wide, but a stateful model is required for dealing with things which update you rapidly or regularly about their changes in state. >> Yeah, absolutely. You know, I think about, as I mentioned before, AI training models. Often, if you look at something like autonomous vehicles, the massive amounts of data that they need to process, you know, has to happen in the public cloud. But then that gets pushed back down to the end device, in this case a car, because it needs to be able to react in real time, and it gets fed, at regular updates, the new training algorithms. What are you seeing... >> I have strong views on this training approach, and on data science in general, and that is that there aren't enough data scientists or, you know, smart people to train these algorithms, deploy them to the edge and so on.
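The stateful, per-entity pattern described above, where each intersection keeps its state in memory and predicts on the fly rather than hitting a database per event, can be sketched in a few lines. This is a toy illustration of the idea, not Swim's actual runtime; the entity ID, event shape and the EWMA "prediction" are all assumptions for the example:

```python
# Toy sketch of stateful stream processing: each entity's state lives
# in memory and is updated as events arrive, so no database round trip
# sits on the hot path. Entity IDs and event fields are assumptions.
state: dict[str, dict] = {}   # entity id -> in-memory state

def on_event(entity_id: str, value: float):
    s = state.setdefault(entity_id, {"count": 0, "ewma": value})
    s["count"] += 1
    # exponentially weighted average as a stand-in for "predict on the fly"
    s["ewma"] = 0.8 * s["ewma"] + 0.2 * value

for v in [10.0, 12.0, 11.0]:
    on_event("intersection/5th-and-main", v)

print(state["intersection/5th-and-main"]["count"])  # 3
```

Each event touches only local memory, which is why the per-event cost is nanoseconds rather than the 100-millisecond stateless round trip quoted above.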
And so there is an alternative worldview, which is a much simpler one, and that is that relatively simple algorithms, deployed at scale to stateful representatives, let's call them digital twins of things, can deliver enormous improvements in behavior as things learn for themselves. So the way I think at least this edge world gets smarter is that relatively simple models of things will learn for themselves and create their own futures, based on what they can see, and then react. And so this idea that we have lots and lots of data scientists dealing with vast amounts of information in the cloud is suitable for certain algorithms, but it doesn't work for the vast majority of applications. >> So where are we with the state of this? What do developers need to think about? You mentioned that there's compute in most devices. That's true, but do they need some special NVIDIA chipset out there? Are there certain programming languages you're seeing as more prevalent? Interoperability? Give us some tips and tricks for those developing. >> Super. So number one, a stateful architecture is fundamental, and sure, React is well known, and there's Akka, for example, and Erlang. Swim is another, so I'm going to use Swim's language, and I would encourage you to look at swimos.org to go and play there. A stateful architecture, which allows actors, small concurrent objects, to statefully evolve their own state based on updates from the real world, is fundamental. By the way, in Swim we use data to build these models. So these little agents for things, we call them web agents because the object ID is a URI, statefully evolve by processing their own real-world data, statefully representing it, and then they do this wonderful thing, which is build a model on the fly. And they build the model by linking to things that they're related to.
So an intersection would link to all of its sensors, but it would also link to all of its neighbors, because linking is like a sub in pub/sub, and it allows that web agent to continually analyze, learn and predict on the fly. And so every one of these concurrent objects is doing this job of analyzing its own raw data, predicting from that, and streaming the result. So in Swim, you get streamed raw data in, and what streams out is predictions: predictions about the future state of the infrastructure. And that's a very powerful stateful approach which can run all in memory, no storage required. By the way, it's still persistent, so if you lose a node, you can just come back up and carry on, but there's no need to store huge amounts of raw data if you don't need it. And let me just be clear: the volumes of raw data from the real world are staggering, right? So four terabytes a day from Palo Alto, but Las Vegas is about 60 terabytes a day from the traffic lights. More than 100 million mobile devices is tens of petabytes per day, which is just too much to store. >> Well, Simon, you've mentioned that we have a shortage when it comes to data scientists and the people that can be involved in those things. How about from the developer side? Do most enterprises that you're talking to have the skillset? Is the ecosystem mature enough for a company to get involved? What do we need to do, looking forward, to help companies take advantage of this opportunity? >> Yeah, so there is this huge challenge in terms of, I guess, just cloud-native skills. And this is exacerbated the more you get out to, I guess, what you could think of as traditional kinds of companies, all of whom have tons and tons of data sources. So we need to make it easy, and Swim tries to do this by effectively using skills that people already have, Java or JavaScript, and giving them easy ways to develop, deploy and then run applications without thinking about them.
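The web-agent idea described above, a small stateful object per real-world thing, addressed by a URI and linked pub/sub-style to its neighbors, can be sketched roughly like this. This is a hypothetical illustration of the concept, not the actual SwimOS API; the class and method names are invented, and swimos.org has the real thing:

```python
# Hypothetical sketch of a "web agent": one small stateful object per
# thing, addressed by a URI, with links that behave like pub/sub
# subscriptions to related agents. Names are invented for illustration.
class WebAgent:
    registry: dict[str, "WebAgent"] = {}

    def __init__(self, uri: str):
        self.uri, self.state, self.links = uri, {}, []
        WebAgent.registry[uri] = self

    def link(self, uri: str):
        # like a sub in pub/sub: subscribe to the linked agent's updates
        WebAgent.registry[uri].links.append(self)

    def on_event(self, key: str, value):
        self.state[key] = value          # statefully evolve own state
        for subscriber in self.links:    # stream the change to linked agents
            subscriber.on_neighbor_update(self.uri, key, value)

    def on_neighbor_update(self, uri: str, key: str, value):
        self.state[f"{uri}:{key}"] = value  # learn from neighbors' state

intersection = WebAgent("/intersection/12")
neighbor = WebAgent("/intersection/13")
neighbor.link("/intersection/12")        # neighbor subscribes to updates
intersection.on_event("phase", "green")

print(neighbor.state)  # {'/intersection/12:phase': 'green'}
```

The model-building happens implicitly: each agent's state accumulates a live view of itself plus everything it links to, which is what lets it "analyze, learn and predict" locally.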
So instead of binding developers to notions of place and where databases are and all that sort of stuff, if they can write simple object-oriented programs about things like intersections, push buttons, pedestrian lights, in-road loops and so on, and simply relate basic objects in the world to each other, then we let data build the model, by essentially creating these little concurrent objects for each thing, and they will then link to each other and solve the problem. We end up solving a huge problem for developers too, which is that they don't need to acquire complicated cloud-native skillsets to get to work. >> Well, absolutely, Simon. It's something we've been trying to do for a long time: to truly simplify things. I want to let you have the final word. If you look out there at the opportunity and the challenge in the space, what final takeaways would you give to our audience? >> So, very simple. If you adopt a stateful computing architecture, like Swim, you get to go a million times faster. The applications always have an answer. They analyze, learn and predict on the fly, and they go a million times faster. They use 10% less, no, sorry, 10% of the infrastructure of a store-then-analyze approach. And it's the way of the future. >> Simon Crosby, thanks so much for sharing. Great having you on the program. >> Thank you, Stu. >> And thank you for joining us. I'm Stu Miniman. Thank you, as always, for watching theCUBE.
Itumeleng Monale, Standard Bank | IBM DataOps 2020
>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi everybody, welcome back to theCUBE. This is Dave Vellante, and you're watching a special presentation, DataOps In Action, made possible by IBM. You know what's happening is, the innovation engine in the IT economy has really shifted. It used to be Moore's Law; today it's applying machine intelligence and AI to data, really scaling that and operationalizing that new knowledge. The challenge is that it's not so easy to operationalize AI and infuse it into the data pipeline. What we're doing in this program is bringing in practitioners who have actually had a great deal of success in doing just that, and I'm really excited to have Itumeleng Monale here. She's the executive head of data management for personal and business banking at Standard Bank of South Africa. Itumeleng, thanks so much for coming on theCUBE. >> Thank you for having me, Dave. >> You're very welcome. First of all, how are you holding up with this COVID situation? How are things in Johannesburg? >> Things in Johannesburg are fine. We've been on lockdown now, I think it's day 33 if I'm not mistaken, I've lost count, but we're really grateful for the swift action of government. We only have, I mean, less than 4,000 cases in the country, and the infection rate is really slow, so I think we've really been able to flatten the curve, and we're grateful for being protected in this way. We're all working from home, learning the new normal. >> And we're all in this together. That's great to hear. Why don't you tell us a little bit about your role? You're a data person; we're really going to get into it, but how do you spend your time? >> Okay, well, I head up a data operations function and a data management function, which really is the foundational part of the data value chain that then allows other parts of the organization to monetize data and liberate it as the use cases apply. We monetize it ourselves as well, but really we're an enterprise-wide organization that ensures that data quality is managed, data is governed, that we have effective practices applied to the entire lineage of the data, that ownership and curation are in place, and everything else, from a regulatory as well as an opportunity perspective, can then be leveraged.

>> So historically, you know, data was viewed as sort of an expense: it's big, it's growing, it needs to be managed, deleted after a certain amount of time. And then ten years ago, with the Big Data movement, data became an asset. You had a lot of shadow IT, people going off and doing things that maybe didn't comply with the corporate edicts, which probably drove your part of the organization crazy. Talk about how people's approach to data has changed in the last five years or so. >> Oh, I mean, you know, the story I tell my colleagues, who are all bankers obviously, is that the banker of 1989 mainly had to know debits and credits and be able to look someone in the eye and know whether or not they'd be a credit risk: if we lend you money, will you pay it back? The banker of the late '90s had to contend with the emergence of technologies that made their lives easier and allowed for automation, and for processes to run much more smoothly. In the early 2000s, I would say digitization was the big focus, and in fact my previous role was head of digital banking. At the time we thought digital was the panacea, the be-all and end-all, the thing that was going to transform organizations. Lo and behold, we realized that once you've got all your digital platforms ready, they are just the plate, or the pipe: nothing is flowing through it, and there's no food on the plate, if data is not the main course. Data has always been an asset; I think organizations just never consciously knew it.

>> Okay, so it sounds like once you've made that initial digital transformation, you really had to work it. And what we're hearing from a lot of practitioners like yourself is that the challenges involve different parts of the organization, different skill sets, and getting everybody to work together on the same page. Maybe you could take us back to when you started on this initiative around DataOps. What was that like, what were some of the challenges you faced, and how did you get through them? >> Okay, first and foremost, Dave, organizations used to believe that data was IT's problem, and that's probably why you then saw the emergence of things like shadow IT. But when you really acknowledge that data is an asset, just like money is an asset, then you have to take accountability for it the same way you would any other asset in the organization, and you will not abdicate its management to a separate function that's not core to the business. Oftentimes IT is seen as a support or an enabling function, but not quite the main show, in most organizations, right? So what we then did is first emphasize that data is a business capability. The function resides in business, next to product management, next to marketing, next to everything else the business needs. Data management also has to be pertinent to every role in every function, to varying degrees. And when you take accountability as an owner of a business unit, you also take accountability for the data in the systems that support that business unit. For us that was the first piece. Convincing my colleagues that data was their problem, and not something they could just leave to us, was also a journey, but that was the first step into getting the data operations journey going. You had to first acknowledge, please, carry on. >> No, you just had to first acknowledge that it's something you must take accountability for as a banker, not delegate it to a different part of the organization. That's a real cultural mindset shift. You know, in the game of rock-paper-scissors, culture kind of beats everything, doesn't it? It's almost like a trump card. So the businesses embraced that, but what did you do to support it? There has to be trust in the data, there has to be timeliness. Maybe you could take us through how you achieved those objectives, and maybe some other objectives the business demanded. >> So the one thing I didn't mention, Dave, is that obviously they didn't embrace it in the beginning. It wasn't an "oh yeah, that makes sense" type of conversation. What we had was a few very strategic people with the right mindset that I could partner with, who understood the case for data management, and while we had that as an in, we developed a framework for a fully matured data operations capability in the organization, and what that would look like in a target-state scenario. And then what you do is wait for a good crisis. So we had a little bit of a challenge in that our local regulator found us a little bit wanting in terms of our data governance, and from that perspective it brought the case for data quality management. Now there's a burning platform; you have an appetite for people to partner with you and say, okay, we need this to comply, help us out. And when they start seeing the outputs in action, they then buy into the concept. So sometimes you just need to wait for a good crisis and leverage it, and only do that which the organization will appreciate at that time. You don't have to go big bang. Data quality management was the use case at the time, five years ago, so we focused all our energy on that, and after that it gave us leeway and license to really bring to maturity all the other capabilities the business might not understand as well.

>> So when that crisis hit, thinking about people, process, and technology, you probably had to turn some knobs in each of those areas. Can you talk about that? >> So from a technology perspective, that's when we partnered with IBM to implement Information Analyzer, in terms of making sure that we could profile the data effectively. What was important for us was to make strides in terms of showing the organization progress, but also to give them access to self-service tools that would give them insight into their data. From a technology perspective that was kind of, I think, the genesis of us implementing the IBM suite in earnest from a data management perspective. People-wise, we really then also began a data stewardship journey, in which we implemented business-unit stewards of data. I don't like using the word "steward" because in my organization it's taken lightly, almost like a part-time occupation, so we converted them: we call them data managers. And the analogy I give is that any department with a P&L, any department worth its salt, has an FD, a financial director. If money is important to you, you have somebody helping you take accountability and execute on your responsibilities in managing that money. So if data is equally important as an asset, you will have a leader, a manager, helping you execute on your data ownership accountability. That was the people journey. So firstly I had kind of soldiers planted in each department, data managers, who would then continue building the culture and maturing the data practices as applicable to each business unit's use cases. What was important is that every data manager in every business unit focused their energy on making that business unit happy, by ensuring their data was at the right compliance level and the right quality, that the right best practices were applied from a process and management perspective, and that it was governed. And then in terms of process, it's really about spreading data management as a practice through the entire ecosystem. It can be quite lonely, in the sense that unless the whole of an organization is managing data, they're worried about doing what they do to make money, and most people in most business units will be the only unicorn relative to everybody else who does what they do. So for us it was important to have a community of practice, a process where all the data managers across the business, as well as the technology partners and the specialists who are data management professionals, come together and make sure we work together on specific use cases.

>> I wonder if I can ask you: the industry sort of likes to market this notion of DevOps applied to data, DataOps. Have you applied that type of mindset, agile, continuous improvement? I'm trying to understand how much is marketing and how much is actually applicable in the real world. Can you share? >> Well, you know, when I was reflecting on this before this interview, I realized that our very first use case of DataOps was probably when we implemented Information Analyzer in our business unit, simply because it was the first time that IT and business, as well as data professionals, came together to spec the use case, and then we would literally, in an agile fashion, with a multidisciplinary team, come together to make sure we got the outcomes we required. I mean, for you to firstly get a data quality management paradigm where we moved from 6% quality at some point on our client data, now we're sitting at 99 percent, and that 1% literally is just a timing issue. To get from 6 to 99, you have to make sure the entire value chain is engaged. So our business partners were the fundamental determinant of the business rules applied, in terms of what does quality mean, what are the criteria of quality. And then what we do is translate that into what we put in the catalog, and ensure that the profiling rules we run are against those business rules that were defined at first. So you'd have up-front determination of the outcome with business, and then the team would go into an agile cycle of maybe two-week sprints, where we develop certain things, have stand-ups, come together, and then the output would be dashboarded in a prototype fashion, where business then gets to go double-check the output. That was the first iteration, and I would say we've become much more mature at it, and we've got many more use cases now. There's actually one that's quite exciting that we recently achieved over the end of 2019 into the beginning of this year. So what we did was, I'm worried about the sunlight coming in through the window. >> You look great to me. >> Like sunset in South Africa. >> We've been on theCUBE set where sometimes it's so bright we have to put on sunglasses. >> So the most recent one, which was late 2019 coming into early this year: we had long since achieved the compliance and regulatory burning-platform issues, and now we're in a place of, I think, opportunity and luxury, where we can find use cases that are pertinent to business execution and business productivity. The one that comes to mind: we're a hundred and fifty-eight years old as an organization, right? So this bank was born before technology. It was also born in the days of, like, no integration, because every branch was a standalone entity. You'd have these big ledgers that transactions were documented in, and I think once every six months or so these ledgers would be taken by horse-drawn carriage to a central place to get reconciled between branches, on paper. But the point is, if that is your legacy, the initial kind of ERP implementations would have been focused on process efficiency based on old ways of accounting for transactions and allocating information, so it was not optimized for the 21st century. Our architecture has had a huge legacy burden on it, and so getting to a place where you can be agile with data is something we're constantly working toward. So we're at a place where we have hundreds of branches across the country, all of them obviously attending to clients, servicing clients as usual, and any person leading sales teams or executional teams was not able, in a short space of time, to see the impact of their tactics from a data perspective, from a reporting perspective. In some cases, based on how our ledgers roll up and how the reconciliation between various systems and accounts works, it would take you six weeks to verify whether your tactics were effective or not, because to actually see the revenue hitting our general ledger and our balance sheet might take that long. That is an ineffective way to operate in such a competitive environment. So what you had was frontline sales agents literally manually documenting the sales they had made, but not being able to verify whether that was bringing in revenue until six weeks later. So what we did then is we sat down and defined all the requirements from a reporting perspective, and the objective was to move from six weeks' latency to 24 hours. And even 24 hours is not perfect: our ideal is that by close of day you're able to see what you've done for that day, but that's the next epoch we'll go through. However, we literally had the frontline teams defining what they'd want to see in a dashboard, the business teams defining what the business rules behind the quality and the definitions would be, and then we had an entire analytics team and the data management team working on sourcing the data, optimizing and curating it, and making sure the latency came down. That's, I think, only our latest use case for DataOps, and now we're in a place where people can look at a dashboard, it's self-service, they can look at any time of day and see the sales they've made, which is very important right now, at the time of COVID-19, from a productivity and executional-competitiveness perspective.

>> Those are two great use cases, Itumeleng. So the first one, you know, going from 6% data quality to 99%: at 6%, all you do is spend time arguing about the data's veracity, and at 99% you're there, and you said it's basically just a timing issue, latency in the timing. And then the second one: instead of paving the cow path with an outdated ledger-based data process, you've now compressed six weeks down to 24 hours, and you want to get to end of day. So you've built the agility into your data pipeline. I'm going to ask you then: when GDPR hit, were you able to very quickly leverage this capability, and apply it to other compliance edicts as well? >> Well, actually, you know, what we did just now was post-GDPR for us. We got GDPR right about three years ago, but literally all we got right was reporting for risk and compliance purposes. The use cases we have now are really around business opportunity. So we prioritized compliance reporting a long time ago, and we're able to do real-time reporting from a single-transaction perspective, suspicious transactions and so on, for the bank and our regulator. From that perspective, that was what was prioritized in the beginning, because it was the initial crisis. So what you found was an entire engine geared toward making sure data quality was correct for reporting and regulatory purposes. But really, that is not the be-all and end-all of it, and if that's all we did, I believe we really would not have succeeded. We succeeded because data monetization, the leveraging of data for business opportunity, is actually what tells you whether you've got the right culture or not. If you're just doing it to comply, it means the hearts and minds of the rest of the business still aren't in the data game.

>> I love this story, because it's nirvana. For so many years we've been pouring money in to mitigate risk, and you have no choice: you do it, the general counsel signs off on it, the CFO grudgingly signs off on it, but it's got to be done. And for years, decades, we've been waiting to use these risk initiatives to actually drive business value. You know, it kind of happened with the enterprise data warehouse, but it was too slow, it was complicated, and it certainly didn't happen with email archiving; that was just sort of a tech box to check. It sounds like we're at that point today, and I want to ask you: we were talking earlier about how a crisis precipitated this cultural shift and you took advantage of that. Well, now mother nature has dealt up a crisis like we've never seen before. How do you see your data infrastructure, your data pipeline, your DataOps: what kind of opportunities do you see in front of you today as a result of COVID-19? >> Well, I mean, because of the quality of client data that we had, we were able to respond to COVID-19 very quickly. In our context, the government put us on lockdown relatively early in the curve, or in the cycle of infection, and it brought a little bit of a shock to the economy, because small businesses all of a sudden didn't have a source of revenue for potentially three to six weeks. And based on the data quality work we did before, it was actually relatively easy to be agile enough to do the things we did. So within the first weekend of lockdown in South Africa, we were the first bank to proactively and automatically offer small businesses, and students with loans on our books, an instant three-month payment holiday, assuming they were in good standing. And we did that up front: it was actually an opt-out process, rather than having to phone in and arrange for it to happen. I don't believe we would have been able to do that if our data quality was not where it was. We have since made many more initiatives to try to keep the economy going, to try to keep our clients in a state of liquidity, and so, you know, data quality in that domain is critical to knowing who you're talking to, who needs what, and which solutions would best fit various segments. I think the second component is that working from home now brings an entirely different normal, right? If we had not been able to provide productivity dashboards and sales dashboards to management and all the users that require them, we would not be able to validate what our productivity levels are now that people are working from home. I mean, we still have essential-services workers who physically go into work, but a lot of our relationship bankers are operating from home, and that baseline and foundation, productivity tracking for various metrics, able to be reported on in a short space of time, has been really beneficial. The next opportunity for us: we've been really good at doing this for the normal operational and frontline type of workers, but knowledge workers have not necessarily been big productivity reporters historically. They kind of produce an output, and the output might come six weeks down the line. But in a place where teams are now not co-located and work needs to flow in an agile fashion, we need to start using the same foundation and data pipeline that we've laid down for the reporting of knowledge work and agile-team type metrics. So in terms of developing new functionality and solutions, there's a flow in a multidisciplinary team, and how do those solutions get architected in a way where data assists in the flow of information, so solutions can be optimally developed?

>> Well, it sounds like you're able to map the metrics that business lines care about into these dashboards, a sort of data-mapping approach, if you will, which makes it much more relevant for the business. As you said before, they own the data. That's got to be a huge business benefit: beyond the culture, beyond the speed, the business impact of being able to do that has to be pretty substantial. >> It really, really is, and the use cases really are endless, because every department finds its own opportunity to utilize the data. I also think the accountability factor has significantly increased, because as the owner of a specific domain of data, you know that you're not only accountable to yourself and your own operation: people downstream of you, as a product and an outcome, depend on you to ensure that the quality of the data you produce is high. So curation of data is a very important thing, and business is really starting to understand that. You know, the cards department knows they are the owners of card data, and the vehicle-asset department knows they are the owners of vehicle data, and they are linked to a client profile, and all of that creates an ecosystem around the client. I mean, when you come to a bank, you don't want to be known as a number, and you don't want to be known just for one product; you want to be known across everything you do with that organization. But most banks are not structured that way. They are still product houses, with product systems on which your data resides, and if those don't act in concert, then we come across as extremely schizophrenic, as if we don't know our clients. And that's very, very important. >> Itumeleng, I could go on for an hour talking about this topic, but unfortunately we're out of time. Thank you so much for sharing your deep knowledge and your story; it's really an inspiring one, and congratulations on all your success. I guess I'll leave it with: what's next? You gave us a glimpse of some of the things you want to do, compressing some of the elapsed times and the time cycles, but where do you see this going in the mid term and longer term? >> Currently, I mean, obviously AI is a big opportunity for all organizations, and you don't get automation of anything right if the foundations are not in place. So I believe this is a great foundation for anything AI to be applied to, in terms of the use cases we can find. The second one is really providing an API economy, where certain data products can be shared with third parties. I think that's probably where we want to take things as well. We are already utilizing external third-party data sources in our data quality management suite, to ensure validity of client identity and residence and things of that nature. But going forward, because fintechs and banks and other organizations are probably going to partner to be more competitive, we need to be able to provide data products that can then be leveraged by external parties, and vice versa. >> Itumeleng, thanks again. It was great having you. >> Thank you very much, Dave. I appreciate the opportunity. >> And thank you for watching, everybody. We're digging into DataOps: we've got practitioners, we've got influencers, we've got experts, and we're going into the CrowdChat, it's crowdchat.net/dataops. Keep it right there; we'll be back with more coverage. This is Dave Vellante for theCUBE. (music)
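The data quality journey described in the interview, business owners defining the quality rules and a profiling engine measuring records against them to produce a quality percentage, can be illustrated with a toy profiler. This is a hedged sketch in plain Python, not IBM Information Analyzer's actual interface; the fields, rules, and sample records are invented for the example:

```python
import re

# Business-defined quality rules for client records (illustrative;
# in practice these rules come from the business data owners, as
# described in the interview, not from IT).
RULES = {
    "id_number": lambda v: bool(v) and v.isdigit() and len(v) == 13,
    "email":     lambda v: bool(v) and re.match(r"[^@]+@[^@]+\.[^@]+", v) is not None,
    "branch":    lambda v: bool(v),
}

def profile(records):
    """Return the percentage of records that pass every business rule."""
    passing = sum(
        all(rule(rec.get(field, "")) for field, rule in RULES.items())
        for rec in records
    )
    return 100.0 * passing / len(records)

clients = [
    {"id_number": "8001015009087", "email": "thabo@example.com", "branch": "JHB"},
    {"id_number": "N/A", "email": "no-email", "branch": ""},
]
print(profile(clients))  # 50.0
```

The percentage this produces is the number a team would track on a dashboard over time, the kind of 6%-to-99% climb the interview describes, with the failing records routed back to the owning business unit for remediation.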
Frank Slootman, Snowflake | CUBE Conversation, April 2020
(upbeat music) >> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> All right everybody, this is Dave Vellante, and welcome to this special CUBE Conversation. I first met Frank Slootman in 2007 when he was the CEO of Data Domain. Back then he was the CEO of a disruptive company, and still is. Data Domain, believe it or not, back then was actually replacing tape drives as the primary mechanism for backup. Yes, believe it or not, it used to be tape. Fast forward several years later, I met Frank again at VMworld, when he had become the CEO of ServiceNow. At the time ServiceNow was a small company, about 100-plus million dollars. Frank and his team took that company to 1.2 billion. And Gartner, at the time of IPO, said "you know, this doesn't make sense. It's a small market, it's a very narrow help desk market, it's maybe a couple billion dollars." The vision of Slootman and his team was to really expand the total available market and execute like a laser. Which they did, and today ServiceNow is a very, very successful company. Snowflake first came into my line of sight in 2015, when SiliconANGLE wrote an article, "Why Snowflake is Better Than Amazon Redshift, Re-imagining Data". Well, last year Frank Slootman joined Snowflake, another disruptive company. And he's here today to talk about how Snowflake is participating in this COVID-19 crisis. I really want to share some of Frank's insights and leadership principles. Frank, great to see you, thanks for coming on. >> Yeah, thanks for having us Dave. >> So when I first reported earlier this year on Snowflake and shared some data with the community, you reached back out to me and said "Dave, I want to just share with you. I am not a playbook CEO, I am a situational CEO. This is what I learned in the military." So Frank, this COVID-19 situation was thrown at you, it's a black swan. What was your first move as a leader?
>> Well, my first move is let's not overreact. Take a deep breath. Let's really examine what we know. Let's not jump to conclusions, let's not try to project things that we're not capable of projecting. That's hard because we tend to have sort of levels of certainty about what's going to happen in the week, in the next month and so on and all of a sudden that's out of the window. It creates enormous anxiety with people. So in other words you got to sort of reset to okay, what do we know, what can we do, what do we control? And not let our minds sort of go out of control. So I talk to our people all the time about maintain a sense of normalcy, focus on the work, stay in the moment and by the way, turn the newsfeed off, right, because the hysteria you get fed through the media is really not helpful, right? So just cool down and focus on what we still can do. And then I think then everybody takes a deep breath and we just go back to work. I mean, we're in this mode now for three weeks and I can tell you, I'm on teleconferencing calls, whatever, eight, nine hours a day. Prospects, customers, all over the world. Pretty much what I was doing before except I'm not traveling right now. So it's not, >> Yeah, so it sounds clear-- >> Not that different than what it was before. (laughs) >> It sounds very Bill Belichickian, you know? >> Yeah. >> Focus on those things of which you can control. When you were running ServiceNow I really learned it from you and of course Mike Scarpelli, your then and current CFO about the importance of transparency. And I'm interested in how you're communicating, it sounds like you're doing some very similar things but have you changed the way in which you've communicated to your team, your internal employees at all? >> We're communicating much more. Because we can no longer rely on sort of running into people here, there and everywhere. So we have to be much more purposeful about communications. 
For example, I mean I send an email out to the entire company on Monday morning. And it's kind of a bunch of anecdotes. Just to bring the connection back, the normalcy. It just helps people get connected back to the mothership and like well, things are still going on. We're still talking in the way we always used to be. And that really helps. And I also check in with people a lot more. I ask all of our leadership to constantly check in with people, because you can't assume that everybody is okay, you can't be out of sight, out of mind. So we need to be more purposeful in reaching out and communicating with people than we were previously. >> And a lot of people obviously concerned about their jobs. Have you sort of communicated, what have you communicated to employees about layoffs? I mean, you guys just did a large raise just before all this, your timing was kind of impeccable. But what have you communicated in that regard? >> I've said, there's no layoffs on our radar, number one. Number two, we are hiring. And number three is we have a higher level of scrutiny on the hires that we're making. And I am very transparent. In other words I tell people look, I prioritize the roles that are closest to the drivetrain of the business. Right, it's kind of common sense. But I wanted to make sure that this is how we're thinking about it. There are some roles that are more postponable than others. I'm hiring in engineering without any reservation, because that is the long term strategic interest of the company. On the sales side, I want to know that sales leaders know how to convert to yield, that we're not just sort of bringing capacity online where the leadership is not convinced or confident that they can convert to yield. So there's a little bit finer level of scrutiny on the hiring. But by and large, it's not that different. There's this saying out there that we should suspend all non-essential spending and hiring, and I'm like, you should always do that. Right?
I mean what's different today? (both laugh) If it's non-essential, why do it, right? So all of this comes back to this is probably how we should operate anyways, yep. >> I want to talk a little bit about the tech behind Snowflake. I'm very sensitive when CEOs come on my program to make sure that we're not, I'm not trying to bait CEOs into ambulance chasing, that's not what it's about. But I do want to share with our community kind of what's new, what's changed and how companies like Snowflake are participating in this crisis. And in particular, we've been reporting for awhile, if you guys bring up that first slide. That the innovation in the industry is really no longer about Moore's Law. It's really shifted. There's a new, what we call an innovation cocktail in the business and we've collected all this data over the last 10 years. With Hadoop and other distributed data and now we have Edge Data, et cetera, there's this huge trove of data. And now AI is becoming real, it's becoming much more economical. So applying machine intelligence to this data and then the Cloud allows us to do this at scale. It allows us to bring in more data sources. It brings an agility in. So I wonder if you could talk about sort of this premise and how you guys fit. >> Yeah, I would start off by reordering the sequence and saying Cloud's number one. That is foundational. That helps us bring scale to data that we never had to number two, it helps us bring computational power to data at levels we've never had before. And that just means that queries and workloads can complete orders of magnitude faster than they ever could before. And that introduces concepts like the time value of data, right? The faster you get it, the more impactful and powerful it is. I do agree, I view AI as sort of the next generation of analytics. Instead of using data to inform people, we're using data to drive processes and businesses directly, right? 
So I'm agreeing obviously with these trends, because we're the principal beneficiaries and drivers of these platforms. >> Well when we talked earlier this year about Snowflake, we really brought up the notion that you guys were one of the first, if not the first. And guys, bring back Frank, I got to see him. (Frank chuckles) One of the first to really sort of separate compute from storage, being able to scale compute independent of storage. And that brought not only economics, but it brought flexibility. So you've got this Cloud-native database. Again, what caught my attention in that Redshift article we wrote is, essentially for our audience, Redshift was based on ParAccel. Amazon did a great job of really sort of making that a Cloud database, but it really wasn't born in the Cloud, and that's sort of the advantage of Snowflake. So that architectural approach is starting to really take hold. So I want to give an example. Guys, if you bring up the next chart. This is an example of a system that I've been using since early January when I saw this COVID come out. Somebody texted me this. And it's the Johns Hopkins dataset, it's awesome. It shows you, go around the map, you can follow it, it's pretty close to real time. And it's quite good. But the problem is, all right thank you guys. The problem is that when I started to look at it, I wanted to get into sort of a more granular view of the counties. And I couldn't do that. So guys, bring up the next slide if you would. So what I did was I searched around and I found a New York Times GitHub data instance. And you can see it in the top left here. And basically it was a CSV. And notice what it says, it says we can't make this file beautiful and searchable because it's essentially too big. And then I ran into what you guys are doing with Star Schema, Star Schema's a data company.
And essentially you guys made the point that look, the Johns Hopkins dataset, as great as it is, it's not sort of ready for analytics, it's got to be cleaned, et cetera. And so I want you to talk about that a little bit. Guys, if you could bring Frank back. And share with us what you guys have done with Star Schema and how that's helping understand COVID-19 and its progression. >> Yeah, one of the really cool concepts I've felt about Snowflake is what we call the data sharing architecture. And what that really means is that if you and I both have Snowflake accounts, even though we work for different institutions, we can share data objects, tables, schemas, whatever they are, with each other. And you can process against that in place, as if they are residing local to your own platform. We have taken that concept from private also to public. So that data providers like Star Schema can list their datasets, because they're a data company, so obviously it's in their business interest to allow this data to be profiled and to be accessible by the Snowflake community. And this data is what we call analytics ready. It is instantly accessible. It is also continually updated, you have to do nothing. It's augmented with incremental data, and then our Snowflake users can just combine this data with supply chain, with economic data, with internal operating data and so on. And we got a very strong reaction from our customer base, because they're like "man, you're saving us weeks "if not months just getting prepared to start to do analyses, let alone doing them." Right? Because the data is analytics ready and they have to do literally nothing. I mean in other words if they ask us for it in the morning, in the afternoon they'll be running workloads against it. Right, and then combining it with their own data. >> Yeah, so I should point out that that New York Times GitHub dataset that I showed you, it's a couple of days behind.
We're talking here about near realtime, or as close to realtime as you can get, is that right? >> Yep. Yeah, every day it gets updated. >> So the other thing, one of the things we've been reporting, and Frank I wondered if you could comment on this, is these new emerging workloads in the Cloud. We've been reporting on this for a couple of years. The first generation of Cloud was IaaS, was really about compute, storage, some database infrastructure. But really now what we're seeing is these analytic data stores where the valuable data is sitting, and much of it is in the Cloud, and bringing machine intelligence and data science capabilities to that, to allow for this realtime or near realtime analysis. And that is a new, emerging workload that is really gaining a lot of steam as these companies try to go through this so-called digital transformation. Your comments on that. >> Yeah, we refer to that as the emergence or the rise of the data Cloud. If you look at the Cloud landscape, we're all very familiar with the infrastructure clouds, AWS and Azure and GCP and so on, it's just massive storage and servers. And obviously there's data locked into those infrastructure clouds as well. We've been familiar for 10, 20 years now with application clouds, notably Salesforce but obviously Workday, ServiceNow, SAP and so on. They also have data in them, right? But now you're seeing that people are unsiloing the data. This is super important. Because as long as the data is locked in these infrastructure clouds, in these application clouds, we can't do the things that we need to do with it, right? We have to unsilo it to allow the scale of querying and execution against that data. And you don't see that any more clearly than you do right now during this meltdown that we're experiencing. >> Okay so I learned long ago Frank not to argue with you, but I want to push you on something. (Frank laughs) So I'm not trying to be argumentative. But one of those silos is on-prem.
I've heard you talk about "look, we're a Cloud company. "We're Cloud first, we're Cloud only. "We're not going to do an on-prem version." But some of that data lives on-prem. There are companies out there that are saying "hey, we separate compute and storage too, "we run in the Cloud. "But we also run on-prem, that's our big differentiator." Your thoughts on that. >> Yeah, we burnt the ship behind us. Okay, we're not doing this endless hedging that people have done for 20 years, sort of keeping a leg in both worlds. Forget it, this will only work in the public Cloud. Because this is how the utility model works, right? I think everybody is coming to this realization, right? I mean excuses are running out at this point. We think that it'll, people will come to the public Cloud a lot sooner than we will ever come to the private Cloud. It's not that we can't run on a private cloud, it just diminishes the potential and the value that we bring. >> So as sort of mentioned in my intro, you have always been at the forefront of disruption. And you think about digital transformation. You know Frank we go to all of these events, it used to be physical and now we're doing theCUBE digital. And so everybody talks about digital transformation. CEOs get up, they talk about how they're helping their customers move to digital. But the reality is is when you actually talk to businesses, there was a lot of complacency. "Hey, this isn't really going to happen in my lifetime" or "we're doing pretty well." Or maybe the CEO might be committed but it doesn't necessarily trickle down to the P&L managers who have an update. One of the things that we've been talking about is COVID-19 is going to accelerate that digital transformation and make it a mandate. You're seeing it obviously in retail play out and a number of other industries, supply chains are, this has wreaked havoc on supply chains. And so there's going to be a rethinking. 
What are your thoughts on the acceleration of digital transformation? >> Well obviously the crisis that we're experiencing is an enormous catalyst for digital transformation and everything that that entails. And what that means, I think, is that as an industry we're just victims of inertia. Right, I mean I haven't understood for 20 years why education, both K through 12 but also higher ed, why they're so brick and mortar bound in the way they're doing things, right? And we could massively scale and drop the cost of education by going digital. Now we're forced into it and everybody's like "wow, "this is not bad." And you're right, it isn't, right? But the economic imperative hasn't really set in before, and it is now. So these are all great things. Having said that, there are also limits to digital transformation. And I'm sort of experiencing that right now, being on video calls all day. And oftentimes people I've never met before, right? There's still a barrier there, right? It's not like digital can replace absolutely everything. That is just not true, right? I mean there's some level of filter that just doesn't happen when you're digital. So there's still a need for people to be in the same place. I don't want to sort of over rotate on this concept, that from here on out we're all going to be on the wires, that's not the way it will be. >> Yeah, be balanced. So earlier you made a comment, that "we should never "be spending on non-essential items". And so you've seen (Frank laughs), back in 2008 you saw the "R.I.P. Good Times" memo, you've seen the black swan memos that go out. I assume that, I mean you're a very successful investor as well, you've done a couple of stints in the VC community. What are you seeing in the Valley in regard to investments, will investments continue, will we continue to feed innovation, what's your sense of that? >> Well this is another wake up call. Because in Silicon Valley there's way too much money.
There's certainly a lot of ideas, but there's not a lot of people that can execute on them. So what happens is a lot of things get funded and the execution is either no good or it's just not a valid opportunity. And when you go through a downturn like this, you're finding out that those businesses are not going to make it. I mean when the tide is running out, only the strongest players are going to survive. It's almost a natural selection process that happens from time to time. It's not necessarily a bad thing, because people get reallocated. I mean Silicon Valley is basically one giant beehive, right? We're constantly repurposing money and people and talent and so on. And that's actually good, because if an idea is not worth investing in, let's not do it. Let's repurpose those resources in places where there's merit, where there's viability. >> Well Frank, I want to thank you for coming on. Look, I mean you don't have to do this. You could've retired long, long ago, but having leaders like you in place, in these times of crisis but even in good times, to lead companies and inspire people. We really appreciate what you do for companies, for your employees, for your customers and certainly for our community, so thanks again, I really appreciate it. >> Happy to do it, thanks Dave. >> All right and thank you for watching everybody, Dave Vellante for theCUBE, we will see you next time. (upbeat music)
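A brief aside on the New York Times county-level file Dave references in the conversation above: it is a flat CSV, and a few lines of Python show both what "analytics ready" buys you and why GitHub refuses to render it as a searchable table. The sketch below is self-contained; the inline sample rows are hypothetical stand-ins that follow the published `date,county,state,fips,cases,deaths` schema of the `us-counties.csv` file, and it reduces the cumulative series to the latest case count per county.

```python
import csv
import io

# Hypothetical sample rows in the NYT covid-19-data format
# (date,county,state,fips,cases,deaths). Inline here so the sketch is
# self-contained; in practice you would read us-counties.csv instead.
SAMPLE = """date,county,state,fips,cases,deaths
2020-03-20,Santa Clara,California,06085,263,8
2020-03-21,Santa Clara,California,06085,302,10
2020-03-21,Suffolk,Massachusetts,25025,89,0
2020-03-22,Suffolk,Massachusetts,25025,118,1
"""

def latest_counts(csv_text):
    """Return the most recent cumulative case count per (county, state)."""
    latest = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["county"], row["state"])
        # ISO dates sort lexicographically, so a string comparison suffices.
        if key not in latest or row["date"] > latest[key][0]:
            latest[key] = (row["date"], int(row["cases"]))
    return {k: v[1] for k, v in latest.items()}

counts = latest_counts(SAMPLE)
print(counts[("Santa Clara", "California")])  # 302
print(counts[("Suffolk", "Massachusetts")])   # 118
```

The point of the "analytics ready" datasets discussed above is that a provider has already done this kind of cleaning and shaping, continuously, so consumers can query the result directly instead of re-running scripts like this one every day.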
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Frank | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Slootman | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
Bill Belichickian | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
April 2020 | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
20 years | QUANTITY | 0.99+ |
Data Domain | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Monday morning | DATE | 0.99+ |
1.2 billion | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
eight | QUANTITY | 0.99+ |
Star Schema | ORGANIZATION | 0.99+ |
early January | DATE | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
10 | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
first move | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
COVID-19 | OTHER | 0.99+ |
both | QUANTITY | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
VMworld | ORGANIZATION | 0.98+ |
One | QUANTITY | 0.98+ |
about 100 plus million dollars | QUANTITY | 0.98+ |
earlier this year | DATE | 0.98+ |
theCUBE Studios | ORGANIZATION | 0.98+ |
first slide | QUANTITY | 0.98+ |
several years later | DATE | 0.98+ |
SiliconANGLE | ORGANIZATION | 0.98+ |
both worlds | QUANTITY | 0.98+ |
playbook | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
next month | DATE | 0.97+ |
New York Times | ORGANIZATION | 0.97+ |
GitHub | ORGANIZATION | 0.97+ |
first generation | QUANTITY | 0.96+ |
nine hours a day | QUANTITY | 0.96+ |
today | DATE | 0.96+ |
12 | QUANTITY | 0.95+ |
Johns Hopkins | ORGANIZATION | 0.94+ |
Daphne Koller, insitro | WiDS Women in Data Science Conference 2020
>> Narrator: Live from Stanford University, it's theCUBE, covering Stanford Women in Data Science 2020, brought to you by SiliconANGLE Media. >> Hi, and welcome to theCUBE. I'm your host, Sonia Tagare, and we're live at Stanford University covering WiDS, the Women in Data Science conference, the fifth annual one. And joining us today is Daphne Koller, who is the CEO and founder of insitro. Daphne, welcome to theCUBE. >> Nice to be here, Sonia, thank you for having me. >> So tell us a little bit about insitro, how you got it founded, and more about your role. >> So I've been working in the intersection of machine learning and biology and health for quite a while, and it was always a bit of an interesting journey, in that the data sets were quite small and limited. We're now in a different world where there's tools that are allowing us to create massive biological data sets that I think can help us solve really significant societal problems. And one of those problems that I think is really important is drug discovery and development, where despite many important advancements, the costs just keep going up and up and up, and the question is, can we use machine learning to solve that problem better? >> And you talk about this more in your keynote, so give us a few highlights of what you talked about. >> So you can think of drug discovery and development in the last 50 to 70 years as being a bit of a glass half-full, glass half-empty. The glass half-full is the fact that there's diseases that used to be a death sentence, or at least a sentence to a lifelong of pain and suffering, that are now addressed by some of the modern-day medicines, and I think that's absolutely amazing. The other side of it is that the cost of developing new drugs has been growing exponentially, in what's come to be known as Eroom's Law, being the inverse of Moore's Law, which is the one we're all familiar with, because the number of drugs approved per billion U.S.
dollars just keeps going down exponentially. So the question is, can we change that curve? >> And you talked in your keynote about the interdisciplinary culture, so tell us more about that. >> I think in order to address some of the critical problems that we're facing, one needs to really build a culture of people who work together from different disciplines, each bringing their own insights and their own ideas into the mix. So at insitro we actually have a company that's half life scientists, many of whom are producing data for the purpose of driving machine learning models, and the other half are machine learning people and data scientists who are working on those. But it's not a handoff where one group produces the data and the other one consumes and interprets it, but really they start from the very beginning to understand: what are the problems that one could solve together, how do you design the experiment, how do you build the model, and how do you derive insights from that that can help us make better medicines for people. >> And I also wanted to ask you, you co-founded Coursera, so tell us a little bit more about that platform. >> I founded Coursera as a result of work that I'd been doing at Stanford, working on how technology can make education better and more accessible. This was a project that I did here with a number of my colleagues as well, and at some point in the fall of 2011 there was an experiment: let's take some of the content that we've been developing within Stanford and put it out there for people to just benefit from. And we didn't know what would happen, would it be a few thousand people? But within a matter of weeks, with minimal advertising other than one New York Times article that went viral, we had a hundred thousand people in each of those courses. And that was a moment in time where we looked at this and said, can we just go back to writing more papers, or is there an incredible opportunity to transform access to education for people all over
the world? And so I ended up taking what was supposed to be a temporary leave of absence from Stanford to go and co-found Coursera. I thought I'd go back after two years, but at the end of that two-year period there was just so much more to be done, and so much more impact that we could bring to people all over the world, people of both genders, people of different socioeconomic status, every single country around the world. I just felt like this was something that I couldn't not do. >> And why did you decide to go from an educational platform to then going into machine learning and biomedicine? >> I'd been doing Coursera for about five years in 2016, and the company was on a great trajectory, but it's primarily a content company, and around me machine learning was transforming the world, and I wanted to come back and be part of that. And when I looked around, I saw machine learning being applied to e-commerce and to natural language and to self-driving cars, but there really wasn't a lot of impact being made on the life science area, and I wanted to be part of making that happen. Partly because I felt, coming back to our earlier comment, that in order to really have that impact you need to have someone who speaks both languages, and while there's a new generation of researchers who are bilingual in biology and in machine learning, it's still a small group, and there are very few of those in my age cohort. And I thought that I would be able to have a real impact by building a company in this space. >> So it sounds like your background is pretty varied. What advice would you give to women who are just starting college now who may be interested in a similar field? Would you tell them they have to major in math, or do you think that there are some other majors that may be influential as well? >> I think there's a lot of ways to get into data science. Math is one of them, but there's also statistics or physics, and I would say that, especially for
the field that I'm currently in, which is at the intersection of machine learning and data science on the one hand and biology and health on the other, one can get there from biology or medicine as well. But what I think is important is not to shy away from the more mathematically oriented courses in whatever major you're in, because that foundation is a really strong one. There's a lot of people out there who are basically lightweight consumers of data science, and they don't really understand how the methods that they're deploying work, and that limits them in their ability to advance the field and come up with new methods that are better suited perhaps to the problems that they're tackling. So I think it's totally fine, and in fact there's a lot of value, to coming into data science from fields other than computer science, but I think taking courses in those fields, even while you're majoring in whatever field you're interested in, is going to make you a much better person who lives at that intersection. >> And how do you think having a technology background has helped you in founding your companies and has helped you become a successful CEO? >> In companies that are very strongly R&D focused, like insitro and others, having a technical co-founder is absolutely essential. It's fine to have an understanding of what the user needs and so on and come from the business side of it, and a lot of companies have a business co-founder, but not understanding what the technology can actually do is highly limiting, because you end up hallucinating, "oh, if we could only do this," and that would be great, but you can't. And people end up oftentimes making ridiculous promises about what technology will or will not do, because they just don't understand where the land mines sit and where you're going to hit real obstacles in the path. So I think it's really important to have a strong technical foundation in these companies. >> And that being said, where do you see insitro
in the future, and how do you see it solving, say, NASH, which you talked about in your keynote? >> We hope that insitro will be a fully integrated drug discovery and development company that is based on a slightly different foundation than a traditional pharma company, where they grew up in the old approach that is very much bespoke scientific analysis of the biology of different diseases, and then going after targets or ways of dealing with the disease that are driven by human intuition. Where I think we have the opportunity to go today is to build a very data-driven approach that collects massive amounts of data and then lets analysis of those data really reveal new hypotheses that might not be the ones that accord with people's preconceptions of what matters and what doesn't. And so hopefully we'll be able to, over time, create enough data and apply machine learning to address key bottlenecks in the drug discovery and development process, so we can bring better drugs to people, and we can do it faster and hopefully at much lower cost. >> That's great. And you also mentioned in your keynote that you think the 2020s is like a digital biology era, so tell us more about that. >> I think if you take a historical perspective on science and think back, you realize that there's periods in history where one discipline has made a tremendous amount of progress in a relatively short amount of time, because of a new technology or a new way of looking at things. In the 1870s that discipline was chemistry, with the understanding of the periodic table and that you actually couldn't turn lead into gold. In the 1900s that was physics, with understanding the connection between matter and energy and between space and time. In the 1950s that was computing, where silicon chips were suddenly able to perform calculations that up until that point only people had been able to do. And then in the 1990s there was an interesting bifurcation: one was the era of data, which is related to computing but
also involves elements of statistics, optimization, and neuroscience, and the other one was quantitative biology, in which biology moved from a descriptive science of taxonomizing phenomena to really probing and measuring biology in a very detailed and high-throughput way, using techniques like microarrays that measure the activity of 20,000 genes at once, or the sequencing of the human genome, and many others. But these two fields kind of evolved in parallel, and what I think is coming now, 30 years later, is the convergence of those two fields into one field that I like to think of as digital biology, where we are able, using the tools that have been and continue to be developed, to measure biology at entirely new levels of detail, of fidelity, of scale. We can use the techniques of machine learning and data science to interpret what we're seeing, and then use some of the technologies that are also emerging to engineer biology to do things that it otherwise wouldn't do. And that will have implications in biomaterials, in energy, in the environment, in agriculture, and I think also in human health. It's an incredibly exciting space to be in right now, because just so much is happening, and the opportunities to make a difference and make the world a better place are just so large. >> That sounds awesome. Daphne, thank you for your insight and thank you for being on theCUBE. >> Thank you. >> I'm Sonia Tagare, thanks for watching theCUBE. Stay tuned for more great
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Daphne Koller | PERSON | 0.99+ |
Sonia | PERSON | 0.99+ |
Daphne | PERSON | 0.99+ |
1950s | DATE | 0.99+ |
1990s | DATE | 0.99+ |
Sonia - Garrett | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
20,000 genes | QUANTITY | 0.99+ |
1900s | DATE | 0.99+ |
1870s | DATE | 0.99+ |
two fields | QUANTITY | 0.99+ |
one field | QUANTITY | 0.99+ |
Stanford University | ORGANIZATION | 0.99+ |
Stanford | ORGANIZATION | 0.99+ |
Coursera | ORGANIZATION | 0.98+ |
2020s | DATE | 0.98+ |
both languages | QUANTITY | 0.98+ |
both genders | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
fall of 2011 | DATE | 0.98+ |
two-year | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
about five years | QUANTITY | 0.96+ |
30 years later | DATE | 0.93+ |
every single country | QUANTITY | 0.93+ |
WiDS Women in Data Science Conference 2020 | EVENT | 0.93+ |
one | QUANTITY | 0.91+ |
one discipline | QUANTITY | 0.9+ |
a hundred thousand people | QUANTITY | 0.9+ |
Nash | PERSON | 0.89+ |
sari | PERSON | 0.89+ |
each | QUANTITY | 0.84+ |
Silicon angle media | ORGANIZATION | 0.83+ |
few thousand people | QUANTITY | 0.83+ |
billion u.s. dollars | QUANTITY | 0.83+ |
two years | QUANTITY | 0.82+ |
New York Times | ORGANIZATION | 0.8+ |
one of those problems | QUANTITY | 0.79+ |
Moore's Law | TITLE | 0.79+ |
one group | QUANTITY | 0.79+ |
Coursera | TITLE | 0.78+ |
2020 | DATE | 0.77+ |
70 years | QUANTITY | 0.76+ |
third computer | QUANTITY | 0.74+ |
fifth annual one | QUANTITY | 0.68+ |
each of those courses | QUANTITY | 0.68+ |
science | EVENT | 0.68+ |
lot of people | QUANTITY | 0.66+ |
half | QUANTITY | 0.64+ |
per | QUANTITY | 0.49+ |
last 50 | DATE | 0.46+ |
Arun | TITLE | 0.4+ |
Rishi Bhargava, Palo Alto Networks | RSAC USA 2020
>> From San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco. Brought to you by SiliconANGLE Media. >> Welcome back. We're here at theCUBE with coverage of RSA Conference, Moscone South floor, bringing you all the action: day one of three days of Cube coverage, where the security game is changing. The big players are making big announcements. The market's changing from on-premise to cloud, then hybrid multi-cloud; we're seeing that wave coming. A great guest here: Rishi Bhargava, VP of product strategy and co-founder of Demisto, which was acquired by Palo Alto Networks, where he's now employed. Rishi, thanks for coming on. >> Thank you. Absolutely happy to be here. >> So, first of all, great journey for your company. Closed a year ago: half a billion, roughly, give or take 60. Congratulations. >> Thank you. >> Big accomplishments. You guys were taken out right in the growth phase, now at Palo Alto Networks, which we've been following, you know, very carefully. You've got a new CMO over there, Jean English; know her very well. We're very bullish on Palo Alto, even though the on-premise-to-cloud transition is happening. You guys are well positioned. How's things going? >> Things are going fantastic. We're investing a lot in the next-gen security business across the board. As mentioned, Prisma Cloud is a big business, and then on the other side, which is what I'm part of, the Cortex family, focused on the security operations center and the efficiencies there. A lot of product innovation, investment, and customer pull from an operations perspective. So, very excited. >> You guys had a big announcement on Monday, and then yesterday was the earnings, which really kind of points to the trend that we're seeing, which is the wave to the cloud, which you're well positioned for, this transition going on. I want to get to the news first, then we'll get into some of the macro industry questions. You guys announced Cortex XSOAR, which is redefining orchestration. >> Yes.
>> What is this about? What's this news about? Tell us. >> So, this news is about... Demisto was acquired about a year ago as well. This is taking that Demisto platform and expanding it to include a very core piece, which is threat intel management. If you look at a traditional SOC, what has happened is SOC teams have had the SIEM, and over the last few years acquired a SOAR platform, such as Demisto: a security orchestration, automation and response platform. But the threat intel has always been separate; the threat intel feeds that came in were separate. With this, we are expanding the power of automation and applying that to the threat intelligence as well. >> What is threat intelligence's current state of the art right now? >> So the current state of the art of threat intelligence is: the larger organizations typically subscribe to a lot of paid feeds and open-source feeds, and aggregate them. But the challenge is, once they aggregate them, they sit in a repository and nobody knows what to do with them. So the operationalization of those feeds is completely missing. >> So basically you have a data pile, a corpus sitting there. No one touches it, and for everyone it's a heavy lift. >> It's a heavy lift, and nobody sees the value coming out of it. How do you proactively hunt using those? How do you use them to protect proactively? >> So explain Cortex XSOAR. What is it, and what's the value? >> So, Cortex XSOAR as a platform: there are four core pieces, three of which were the core tenets of Demisto. The big one is automation and orchestration. So today we integrate with close to 400 different security and IT products via the API, and let customers build these workflows; we come out of the box with close to 80 or 90 different workflows.
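To make the orchestration idea concrete, here is a minimal sketch of what such a cross-product workflow might look like. All of the client classes, product behaviors, and data below are hypothetical, invented purely for illustration; a real platform such as the one described wires integrations together through its own playbook engine rather than hand-written scripts like this.

```python
# Hypothetical sketch of a SOAR-style workflow: connect to one product
# for the data, enrich it with a second, then take an action in a third.
# Every class and value here is a stand-in, invented for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Alert:
    id: str
    source_ip: str
    severity: str
    notes: List[str] = field(default_factory=list)


class SiemClient:
    """Stand-in for a SIEM integration that yields new alerts."""
    def fetch_new_alerts(self) -> List[Alert]:
        return [Alert(id="A-1001", source_ip="203.0.113.7", severity="high")]


class EnrichmentClient:
    """Stand-in for a threat-intel enrichment integration."""
    def reputation(self, ip: str) -> str:
        return "malicious" if ip.startswith("203.0.113.") else "unknown"


class FirewallClient:
    """Stand-in for a firewall integration that can block an address."""
    def __init__(self) -> None:
        self.blocked: List[str] = []

    def block_ip(self, ip: str) -> None:
        self.blocked.append(ip)


def run_playbook(siem: SiemClient, intel: EnrichmentClient,
                 fw: FirewallClient) -> List[Alert]:
    """One workflow: pull data from one product, act in another."""
    handled = []
    for alert in siem.fetch_new_alerts():
        verdict = intel.reputation(alert.source_ip)   # enrich step
        alert.notes.append(f"reputation={verdict}")
        if verdict == "malicious" and alert.severity == "high":
            fw.block_ip(alert.source_ip)              # action step
            alert.notes.append(f"blocked {alert.source_ip}")
        handled.append(alert)
    return handled
```

The point of the sketch is only the shape of the chain: data from product one, a decision, an action in product two, with every step recorded on the alert so the case history survives.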
The idea of these workflows is being able to connect to one product for the data, go to another, take an action there: automation and orchestration built as a visual playbook. Second is case management, and this is very critical, right? I mean, if you look at the process side of security, we have never focused as an industry on the process and the human side of security. So how do you make sure every security alert, the process, the case management, the escalations, the SLAs are all managed? So that's the second piece of Cortex. Third, collaboration. One of the core tenets of Demisto was... we heard from customers that analysts do not talk to each other effectively, and when they do, nobody captures that knowledge. So Demisto has an inbuilt war room, which Cortex XSOAR now has: the collaboration war room, which is available to chat among analysts. But not only that: chat with the bot, take actions. The fourth piece, which is the newly expanded platform, is the threat intel management: to be able to now use the power of orchestration, automation, and collaboration all for the threat intelligence feeds as well, not only the alerts. >> So you're adding in the threat intelligence feeds. >> Yes. >> So is that visualized? AI and machine learning on that? How is it being processed in real time? How does that on-demand work for the feeds? >> So the biggest piece is applying the automation and intelligence to automatically score them, and being able to customize the scoring to the customer's needs: a customized confidence score. And once you have the high-fidelity indicators, automatically go block them. As an example, if you get a very high-fidelity IOC from the FBI that this particular domain is a malicious domain, you would want that blocked in your firewall, executed immediately. And that is not happening today. That is the core. >> And that's because the constraint is, I don't know the data? >> We don't know the data, and it's manual. Some human needs to review it.
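The scoring-and-blocking loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the feed names, per-feed trust weights, corroboration bonus, and blocking threshold are all invented for the example, standing in for whatever scoring model and firewall integrations a real threat intel management product would actually use.

```python
# Toy sketch of operationalizing threat-intel feeds: aggregate indicator
# sightings from several feeds, compute a customizable confidence score,
# and auto-block only the high-fidelity indicators. Feed names, weights,
# and the threshold are assumptions made for illustration.

from collections import defaultdict
from typing import Dict, List, Tuple

# Per-feed trust weights -- the "customized confidence" knob a customer
# would tune to match how much they trust each source.
FEED_WEIGHTS = {"gov_feed": 0.9, "vendor_feed": 0.7, "open_source_feed": 0.4}


def score_indicators(sightings: List[Tuple[str, str]]) -> Dict[str, float]:
    """sightings: (indicator, feed_name) pairs aggregated across feeds.

    Score = highest weight among the feeds that reported the indicator,
    plus a small bonus for each corroborating feed, capped at 1.0."""
    feeds_per_ioc = defaultdict(set)
    for indicator, feed in sightings:
        feeds_per_ioc[indicator].add(feed)
    scores: Dict[str, float] = {}
    for indicator, feeds in feeds_per_ioc.items():
        base = max(FEED_WEIGHTS.get(f, 0.1) for f in feeds)
        bonus = 0.05 * (len(feeds) - 1)  # corroboration bonus
        scores[indicator] = min(1.0, base + bonus)
    return scores


def auto_block(scores: Dict[str, float], threshold: float = 0.85) -> List[str]:
    """Return only the indicators confident enough to push to a firewall."""
    return sorted(ioc for ioc, s in scores.items() if s >= threshold)
```

For example, a domain reported by both a high-trust feed and an open-source feed clears the threshold and gets blocked automatically, while one seen only in a low-trust feed stays in the repository for a human to review, which is exactly the triage split being described.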
Some human needs to go look... it's just not being surfaced. >> Just not. >> So let's get back into some of the human piece. I love the collaboration piece. One of the things that I hear all the time in my Cube interviews, across all the hundreds of events we go to, is the human component you mentioned. People have burnt out. I mean, the security guys... the joke was, CIOs have good days once in a while; CSOs don't have any good days, and there's kind of a pejorative in that. But that's the reality, isn't it? >> Yes, and we're actually okay with it. Talking of jokes, we have this one: what do you call an overworked security analyst? A security analyst, because every one of them is overworked. >> So this is a huge thing. So, like the AI and some of the predictive analytics, the trend is toward personalization for the analyst. >> Exactly. >> This is a trend that we're seeing. What's your view on this? >> You're absolutely right, we're seeing that trend, which is: how do you make sure the analyst gets to see the data they're supposed to see, at the right time? So one aspect is, what do you bring up to the analyst? What is relevant, and do you bring it up at the right time for them to be able to use it and respond with it? That comes in from a machine-learning perspective, and our Cortex XDR suite of products actually does a fantastic job of bringing very rich data to the analyst at the right time. And then the second is, can we help the analyst respond to it? Can we take the repetitive work away from them with a playbook approach? And that's what the Cortex platform brings to that. >> I love to riff on some future scenarios... I won't say sci-fi, but I've got to roll out a little bit of the future. To me, I think security has to get to something like a multiplayer gaming environment, because imagine a first-person shooter game, you know, or a collaborative game, where it's fun.
Because once you start that collaboration, then you're going to have some ROI around it. >> I thought about that already. >> Don't waste your time, or you get to know people. So sharing has been a big part? How soon do you think we're going to get to an environment... I won't say like gaming, but that notion of a headset on, I've got some data, I know you and your reputation... your armor, your certifications, metaphorically speaking. >> I think we have a lot of these aspects, and I think it's a very critical point you mentioned, right? One of the things, which we call the virtual war room... like in XSOAR, I was pointing out the fact that you can have analysts sit in front of a collaboration war room, not only chat with their peers but chat with a bot to go take care of things. This is equivalent to... remember that Matrix movie? Plugging in, and she says, do you know how to fly this helicopter? Downloads the data, and now I do. That's exactly what it is. I think we need to move to a point where, no matter what the security tool is, whatever your endpoint is, you should not have to learn every endpoint every time. The normalization of running those commands via the collaboration war room should be there. I would say we're starting to see it in some of the customers out there: they're using the collaboration war room to run those commands interactively. I would say, though, there's a big challenge: security vendors do not do a good job normalizing that data, and that is where we're trying to help. >> First of all, you get the award for bringing up a Matrix quote in a Cube interview, so props for that. So you have blue teams, red teams... pick the pill, I mean, people are picking their teams. What's going on? How do you see the whole red team, blue team thing happening? >> I think there's really good stuff happening. In my opinion, John, what's going on is right now... so far, if you see, if I go back three years, our adversaries were automating.
Then we started to see this trend of red-team automation, with breach automation, and a bunch of companies starting to do that. With Cortex XSOAR and similar products, we're starting to now automate the blue-team side of things, which is: how do you automatically respond? How do you protect yourself? How do you put the response framework back there? The next trend I'm starting to see is these things coming together into a unified platform, where the blue team and the red team are part of the same umbrella. They're sharing the data, they're sharing the information, and the threat intel is shared. So I see we're at a very, very good point. Of course, the adversaries are not going to sit idle. >> Like you said, it's about the DevOps mindset: having this notion of knowledge coming your way, and having sharing packages all baked out for you, so you don't do the heavy lifting. That's really the problem. The data is a problem: so much demand, so much of it, and you don't know what is good and what is not. Great, great conversation; again, the Matrix reference. About your journey: you've been an entrepreneur and sold, you had a great exit, and Palo Alto is a world-class, blue-chip company in the industry, public, going through a transition. What's it like for an entrepreneur now at the big company? >> The opportunity is amazing. I think the journey has been very quick. We saw some crazy growth with Demisto, and even after the acquisition it's been an incredibly fast pace. It's very interesting... a lot of people are like, hey, you must be resting now. It's like, no, the journey is amazing. I think we at Palo Alto Networks fundamentally believe that we need to innovate really, really fast to keep the adversaries out. So that's been the journey. And we have accelerated, in fact, some of the product plans that we had as a startup, delivering much faster. So the journey has been incredible, and we have been seeing that growth. >> Well, they picked you guys right up.
There's no vesting-and-resting going on; you guys were on the uphill, on the upslope of growth, and certainly relevant for Palo Alto. So clearly, you know, you're having fun. People vest and rest when they've checked out; you guys look like you're doing good. So I've got to ask you the question: when you started, what was the original mission, and where is it now? Is there any deviation? What's been the course? >> You know, this is a very, very relevant question. It's very interesting: right after the acquisition, we went and looked at a pitch deck which we presented to VCs in mid-2015. Believe it or not, the mission has not changed, not changed an iota. It had the same components of: how do you make the life of a security person, a security analyst, easy? It's all the same mission: by automating more, by applying AI and learning to help them further, by letting them collaborate. All the aspects of case management, process, collaboration, automation... it's not changed. >> That's actually very powerful, because you're on the same mission. Of course, you're adding more and more capabilities, but you're still on the same path, going on that. So every company's got their own little nuance, their Moore's Law, like Intel. What made you guys successful? Was it the culture of DevOps? It sounds like you guys had something that was ingrained. >> I would say, by the way, making things easy. But you've got to do it. You've got to stay the course. >> What was that? >> I think that's a fundamental cultural feature. There's one thing we really stand by, and I actually tweeted about this a few weeks ago, which is: every idea is only as good as its execution. So there are two things we really focus on, which are customer focus and execution. We were really, really particular about customer needs: who gets the product, who needs to use the product. Customer focus and execution. We heard the customers loud and clear, and every sprint got better. And that's what we also did.
>> You guys have this agile mindset as well. >> Absolutely, an agile mindset, and the development that comes with the customer focus, because we weigh these micro-requirements: the customer wants this... like, why do they want this? What is the end goal? Be a learner, move on, make a decision. >> Decision-making like the Amazon Web Services way: debate, argue, align, go. And then once you've said it, go. >> Yes. >> We see a great success story again: a startup right out of the gate in 2015, acquired a couple years later. Congratulations to you and your team, and looking forward to seeing you at the next Palo Alto Networks event. Thanks for coming on. Great insight here on the Cube coverage. I'm John Furrier, here on the ground floor of RSA Conference at Moscone, getting all the signal, extracting it from the noise, here on the Cube. Thanks for watching. >> Yeah, yeah
Wendy Mars, Cisco | Cisco Live EU Barcelona 2020
>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back, everyone, to the Cube's live coverage: day four of four days of wall-to-wall action here in Barcelona, Spain, for Cisco Live 2020. I'm John Furrier with my co-host Dave Vellante, with a very special guest here to wrap up Cisco Live: the president of Europe, Middle East, Africa and Russia for Cisco, Wendy Mars. Cube alumni, great to see you. Thanks for coming on to kind of put a bookend on the show here. Thanks for joining us. >> It's absolutely great to be here. Thank you. >> So, what a transformation. As Cisco's business model continues to evolve, we've been saying brick by brick, we still think a big move is coming. I think there's more action. I can sense the walls talking to us: Cisco Live in the US, more technical announcements in the next 24 months. You can see where it's going. It's cloud, it's apps, it's policy-based programmability. It's really a whole other business model shift for you and your customers: a technology shift and a business model shift. So I want to get your perspective. This year's opening keynote, you led it off talking about the philosophy of the business model, but also the first presenter was not a networking guy; it was an application person, AppDynamics. This is a shift. What's going on with Cisco? What's happening? What's the story? >> You know, all of the work that we're doing is really driven by what we see from requirements from our customers, the change that's happening in the market, and it is all around... you know, digital transformation is the driver. Organizations now are incredibly interested in: how do they capture that opportunity? How do they use technology to help them? But if you look at it, really, there are three items that are so important. It's the business model evolution.
It's the business operations for organizations, plus their people and the communities within that: those three things working together. And if you look at it, it's so exciting with AppDynamics there, because for us within Cisco, that linkage of the application layer through into the infrastructure, into the network, and bringing that linkage together, is the most powerful thing, because that's the insights and the value our customers are looking for.
But also we are interested in the values of the organizations that we're getting the capability from, as well as the products and the services that naturally we're looking to gain. So if you look at that business model itself, this is about organizations making sure they stay ahead from a competitive standpoint, about the innovation of portfolio that they're able to bring, but also that they have a strong, strong focus around the experience their customer gains from an application, a touch standpoint, and that all comes through those different channels, which is, at the end of the day, the application. Then, if you look at how you deliver that capability through the systems, the tools, the processes: as we all evolve, our businesses have to change the dynamic within the organization to cope with that. And then, of course, in driving any transformation, the critical success factor is your people and your culture. You need your teams with you. The way teams operate now is incredibly different. It's no longer command and control; it's agile capability coming together. You need that to deliver on any transformation, never mind have it be smooth in the execution. They're all three together. >> What I like about that model, and I have to say, this is, you know, 10 years of doing the Cube: you see that marketing in the vendor community often leads what actually happens. Not surprising. As we entered the last decade, there was a lot of talk about cloud; well, it kind of was a good predictor. We heard a lot about digital transformation. A lot of people roll their eyes and think it's a buzzword, but we really are, I feel, exiting this cloud era into the digital era. It feels real, and there are companies that get it and are leaning in. There are others that maybe are complacent. I'm wondering what you're seeing in Europe, just in terms of... everybody talks digital, every CEO wants to get it right, but there is complacency.
The financial services folks say, well, I'm doing pretty well; not on my watch. Others say, hey, we want to be the disruptors and not get disrupted. What are you seeing in the region, in terms of that sentiment? >> I would say, across the region, there will always be verticals and industries that are slightly more advanced than others. But I would say that in the bulk of conversations that I'm engaged in, independent of the industry or the country in which we're having that conversation, there is an acceptance of: digital transformation is here, it is affecting my business, and if I don't disrupt, I myself will be disrupted. And they challenge us: help me. So, you know, they're not disputing the end state; they want guidance and support to drive the transition in a risk-mitigated manner, and they're looking for help in that. There's actually pressure in the boardroom now around: what are we doing? Within organizations... within the enterprise, the service provider, the public sector, any type or style of company, there's that pressure point in the boardroom of: come on, we need to move at speed. >> Now, the other thing about your model is that technology plays a role and contributes; it's not the be-all end-all, but it plays a role in each of those: the business model, business operations, developing and nurturing communities. Can you add more specifics? What role do you see technology playing in terms of advancing those three areas? >> So I think, if you look at it, technology is fundamental to all of those spheres, in regard to the innovation, the differentiation, technology can bring. The key challenge is being able to apply it in a manner where you can really see differentiation of value within the business, within the customer's organization. Otherwise, it's technology for the sake of technology.
So we see very much a movement now to this conversation of: talk about the use case. The use cases are the way by which that innovation can be used to deliver value to the organization, and also the different ways by which a company will work. Look at the collaboration capability that we announced earlier this week, helping to bring to life that agility. Look at the AppD discussion, of helping to link the layer of the application into the infrastructure of the network, to get to root-cause identification quickly and to understand where you may have a problem before it actually arises and causes downtime. Many, many ways. >> I think the agility message has always been a technical conversation: agile methodology, software development... no problem, check. That's 10 years ago. But business agility is moving from a buzzword to reality. >> Exactly. That's what you're kind of getting at: teams, how teams operate, how they work, and being able to be quick, efficient, stand up, stand down, and operate in that way. >> You know, we were kind of thinking out loud on the Cube, just riffing with Fabio Gori on Cisco's team, and with Gene Kim, around kind of real time. What was interesting is, we're like, okay, it's been 13 years since the iPhone, and so 13 years of mobile. In your territory, Europe, Middle East and Africa, mobility has been around since before the iPhone, and data privacy is much more advanced in your region. So you have a region that's pretty much, I think, the tell-tale sign for what's going on in North America and around the world. And so you think about that and say, okay, how is value created? How are the economics changing? This is really the conversation about the business model: the value activities are shifting and becoming more agile, and the economics are changing with SaaS. If someone's not on this bandwagon, it's not an end-state discussion. It's a done deal.
>> Yeah, but I think also there are some other conversations which are very prevalent here in the region: around trust, around privacy law, understanding compliance. If you look at data, where data resides, portability of that data... GDPR came from Europe and has pushed out on those conversations, which will continue as we go over time. And if I also look at the dialogue that you saw within the World Economic Forum around sustainability, that is becoming a key discussion now within government here in Spain, from a climate standpoint, and in many other areas as well. >> Dave and I have been riffing around this whole question of where the innovation is coming from. It's coming from your region, not so much the US. We've got some great innovations, but look at blockchain: the US is like, don't touch it, while it's pretty progressive outside the United States. A little dangerous too, but that's where innovation is coming from, and this is really the key that we're focused on. I want to get your thoughts on how you see it going to the next level, the next-gen business model. What's your vision? >> So I think there'll be lots of things. If we look at things like the introduction of artificial intelligence, robotics capability, 5G, of course, on the horizon... we have Mobile World Congress here in Barcelona in a few weeks' time. And as you said about the iPhone, the smartphone: when 4G was introduced, no one knew what the use case would be. It was the smartphone, which wasn't around at that time. So with 5G and the capability there, that will again bring yet more change to the business model for different organizations, and to the capability we can bring to market. >> The way we think about AI, privacy, and data ownership becomes more important, some of the things you were talking about before. It's interesting what you're saying.
John and Wendy, GDPR set this standard, and you're seeing in the US there are stovepipes for that standard: California is going to do one, every state is going to have a difference, and that's going to slow things down. It's going to slow down progress. Do you see sort of an extension of a GDPR-like framework being adopted across the region, potentially accelerating some of these sticky public policy issues, so they can actually move the market forward? >> I think that will happen, because there'll be more and more... if you look at this terminology of data as the new oil: what do you do with data? How do you actually get value from that data and make intelligent business decisions around it? So, yeah, that's critical. But yet, all of us are extremely passionate about where our data is used. Again, back to trust and privacy: you need compliance, you need regulation. And I think this is just the beginning of how we will see that evolving. >> Wendy, I want to get your thoughts on this. Dave, I've been riffing for 10 years around the death of storage... long live storage. But data needs to be stored somewhere, and networking is the same kind of conversation: it just doesn't go away. In fact, there's more pressure now. The smartphone was 13 years ago, and before that, mobility. Data and video are now super important drivers, and that's putting more pressure on you guys. And so, hey, we did well: networking. It's kind of like Moore's Law: more networking, more networking. So video and data are now big. Your thoughts on video and data? >> Well, if you look at the Internet of the future... all of us now are also demanding, as individuals, around capability and access to that Internet of the future, the next phase. We want even more, so there'll be more and more requirements for speed, availability, that reliability of service, and the way by which we engage and we communicate.
There are some fundamentals there, so it's continuing to grow, which is so, so exciting for us. >> So, you talk about digital transformation; that's obviously on the minds of C-level executives. I've got to believe security is up there as a topic. What's the conversation like in the corner office when you go visit your customers? >> So I think there's a huge excitement around the opportunity, realizing the value of the opportunity. You know, top-of-mind conversations are around security, around making sure that you can maintain that fantastic customer experience, because if you don't, the customer will go elsewhere. How do you do that? How do you enrich it at all times? And also looking at market adjacencies: as you go in and talk at senior levels within organizations, independent of the industry they're in, there's a huge amount of commonality that we see across those, of consistent problems organizations are trying to solve. And actually, one of the big questions is: what's the pace of change that I should operate at? When is it too fast, and when is it too slow? Trying to balance that is exciting, but also a challenge for a company. >> So you feel like sentiment is still strong, even though we're 10 years into this bull market? You've got Brexit, China tensions, US elections. But generally, you see sentiment still pretty strong, and demand? >> So I would say that the excitement around technology, the opportunity that is there around technology in its broader sense, is greater than ever before. And I think it's on all of us to be able to help organizations understand how they can consume and see value from it. But it's a fantastic time. >> That gets to the economic indicators... >> I know you have to be careful, >> but really, what I think I'm trying to get to is the mindset of the CEO.
In the corner office right now, is it that we're going to grow short term by cutting, or are we going to be aggressive and go after this incremental opportunity? It's probably both. You see a lot of automation. >> Both. And I think fundamentally, for organizations, it's three things: help me make money, help me save money, and keep me out of trouble. Those are the pivots they all operate with. And depending on where an organization is in its journey, whether they're a startup, in the middle, or more mature, and on the different dynamics of the markets in which they operate, there are all sorts of variables. So it's mixed. >> Wendy, thanks so much for spending the time to come on theCUBE; we really appreciate it. Great keynote. Folks watching, if you haven't seen the keynote, the opening section is good, and second, the business model piece. I think it's really right on, and I think that's a conversation that will continue. So thanks for sharing that. Before we leave, I want to ask what's going on for you here in Barcelona. As the show winds down, you've had all your activities. Take us into a day in the life of what you do here. Customer meetings? What were some of those conversations? Take us inside. What goes on for you here? >> I tell you, it's been an amazing few days. It's a combination of customer conversations around some of the themes we just talked about, conversations with partners, time with some of the companies we invest in at Cisco, and also spending time with the teams as well. The DevNet zone, you know, is amazing. We have the closing session this afternoon, where we've got a fantastic external guest coming in, which is going to be really exciting as well. And then, of course, the party tonight, where we'll be announcing the next location, which I'm not going to reveal now.
Later on today. >> We kind of figured it out, because it's our job to break news, but we're not going to break it for you; you have that. Hey, thank you so much for coming on. We really appreciate it. Wendy Mars runs the market in Europe, the Middle East, Africa, and Russia for Cisco. She's got her hand on the pulse, and the future is the business model. That's what's going on: fundamentally radical change across the board, in all areas. This is theCUBE, bringing you all the action here in Barcelona. Thanks for watching.
SUMMARY :
Cisco Live 2020 in Barcelona, brought to you by Cisco and its ecosystem partners. Wendy Mars of Cisco joins theCUBE to close out the show, covering the keynote's business-model message, digital transformation as a board-level conversation, data as the new oil and the trust, privacy, and regulation questions that come with it, the growing pressure that video and data put on networking, and what C-level executives are asking about the pace of change, sentiment, and demand. The conversation wraps with her week in Barcelona, the DevNet zone, the closing session, and tonight's party, where the next location will be announced. This is theCUBE, bringing you all the action here in Barcelona.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jason | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Wendy Mars | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Spain | LOCATION | 0.99+ |
Gene Kim | PERSON | 0.99+ |
Fabio Gori | PERSON | 0.99+ |
13 years | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Russia | LOCATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
Wendy | PERSON | 0.99+ |
United States | LOCATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
both | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
Kate Capability | PERSON | 0.99+ |
Brexit | EVENT | 0.99+ |
13 years ago | DATE | 0.99+ |
tonight | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
Middle East Africa | LOCATION | 0.98+ |
three things | QUANTITY | 0.98+ |
three items | QUANTITY | 0.98+ |
SAS | ORGANIZATION | 0.98+ |
three areas | QUANTITY | 0.98+ |
Francisco Wendy | PERSON | 0.97+ |
10 years ago | DATE | 0.97+ |
today | DATE | 0.96+ |
this year | DATE | 0.95+ |
North American | LOCATION | 0.95+ |
four days | QUANTITY | 0.93+ |
three | QUANTITY | 0.93+ |
earlier this week | DATE | 0.93+ |
each | QUANTITY | 0.92+ |
Mobile World Congress | EVENT | 0.92+ |
800 series | QUANTITY | 0.91+ |
this afternoon | DATE | 0.9+ |
Cisco Live | EVENT | 0.9+ |
last decade | DATE | 0.9+ |
Moore's Law | TITLE | 0.87+ |
Cube | ORGANIZATION | 0.84+ |
five g | OTHER | 0.84+ |
Mars | ORGANIZATION | 0.82+ |
Day four | QUANTITY | 0.79+ |
three kind | QUANTITY | 0.78+ |
next 24 months | DATE | 0.78+ |
first presenter | QUANTITY | 0.73+ |
Cube | COMMERCIAL_ITEM | 0.72+ |
EU | LOCATION | 0.72+ |
every single business | QUANTITY | 0.72+ |
Cisco Live 2020 | EVENT | 0.67+ |
five G | TITLE | 0.67+ |
California | LOCATION | 0.66+ |
US | ORGANIZATION | 0.65+ |
2020 | DATE | 0.63+ |
Breaking Analysis: The Trillionaires Club: Powering the Tech Economy
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello everyone and welcome to this week's episode of theCUBE Insights powered by ETR. And welcome to the Trillionaire's Club. In this Breaking Analysis, I want to look at how the big tech companies have really changed the recipe for innovation in the Enterprise. And as we enter the next decade, I think it's important to sort of reset and re-look at how innovation will determine the winners and losers going forward, including not only the sellers of technology but how technology applied will set the stage for the next 50 years of economic growth. Here's the premise that I want to put forth to you. The source of innovation in the technology business has been permanently altered. There's a new cocktail of innovation, if you will, that will far surpass Moore's Law in terms of its impact on the industry. For 50 years we've marched to the cadence of Moore's Law, that is, the doubling of transistor counts every 18 months, as shown in the left-hand side of this chart. And of course this translated, as we know, into a chasing of the chips, whereby being first with the latest and greatest microprocessor brought competitive advantage. We saw Moore's Law drive the PC era, the client-server era, and it even powered the internet, notwithstanding the effects of Metcalfe's Law. But there's a new engine of innovation, or what John Furrier calls the "Innovation Cocktail," and that's shown in the right-hand side of this slide, where data plus machine intelligence or AI and Cloud are combinatorial technologies that will power innovation for the next 20-plus years. 10 years of gathering big data have put us in a position to now apply AI. Data is plentiful but insights are not, and AI unlocks those insights. The Cloud brings three things: agility, scale, and the ability to fail quickly and cheaply.
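As an aside, the cadence arithmetic is easy to sanity-check: a fixed doubling period of m months implies a compound annual growth rate of 2^(12/m) − 1. Here's a minimal sketch (illustrative only; the function name is ours, not anything from the episode):

```python
def annual_growth_from_doubling(months: float) -> float:
    """Compound annual growth rate implied by a doubling every `months` months."""
    return 2 ** (12 / months) - 1

# The two Moore's Law cadences commonly quoted:
print(f"24-month doubling: {annual_growth_from_doubling(24):.0%} per year")  # ~41% per year
print(f"18-month doubling: {annual_growth_from_doubling(18):.0%} per year")  # ~59% per year
```

This is why a 24-month doubling is often rounded to "about 40% annual improvement," and why a blended rate above 118% per year, as claimed for Apple's SoCs, is such a dramatic departure from the historical CPU curve.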
So, it's these three elements and how they are packaged and applied that will, in my view, determine winners and losers in the next decade and beyond. Now why is this era now suddenly upon us? Well I would argue there are three main factors. One is cheap storage and compute combined with alternative processor types, like GPUs, that can power AI. And the era of data is here to stay. This next chart from Dave Moschella's book, "Seeing Digital," really underscores this point. Incumbent organizations born in the last century organized largely around human expertise or processes or hard assets like factories. These were the engines of competitive advantage. But today's successful organizations put data at the core. They live by the mantra of data driven. It is foundational to them. And they organize expertise, processes and people around the data. All you got to do to drive this point home is look at the market caps of the top five public companies in the U.S. stock market: Apple, Microsoft, Google, Amazon, and Facebook. I call this chart the Cuatro Comas, as a shout-out to Russ Hanneman, the crazy billionaire who was a supporting character in the Silicon Valley series. Now each of these companies, with the exception of Facebook, has hit the trillion dollar club. Amazon, like Mr. Hanneman, hit trillion-dollar status back in September 2018 but fell back down and lost a comma. These five data-driven companies have surpassed big oil and big finance. I mean, the next closest company is Berkshire at 566 billion. And I would argue that if it hadn't been for the fake news scandal, Facebook probably would be right there with these others. Now, with the exception of Apple, these companies are not highly valued because of the goods they pump out; rather, and I would argue even in the case of Apple, they're highly valued because they're leaders in digital and in the best position to apply machine intelligence to massive stores of data that they've collected.
And they have massive scale, thanks to the Cloud. Now, I get that the success of some of these companies is largely driven by the consumer, but the consumerization of IT makes this even more relevant, in my opinion. Let's bring in some ETR data to see how this translates into the Enterprise tech world. This chart shows market share from Microsoft, AWS, Apple iPhone, and Google in the Enterprise all the way back to 2010. Now I get that the iPhone is a bit of a stretch here but stick with me. Remember, market share in ETR terms is a measure of pervasiveness in the data set. Look at how Microsoft has held its ground. And you can see the steady rise of AWS and Google. Now if I superimpose traditional Enterprise players like Cisco, IBM, or Hewlett or even Dell, that is, companies that aren't competing with data at the core of their business, you would see a steady decline. I am required to black out January 2020 as you probably remember, but that data will be out soon and made public shortly after ETR exits its self-imposed quiet period. Now the Apple iPhone is not a great proxy, as Apple is not an Enterprise tech company, but it's data that I can show. And I would argue again that Apple's real value, and a key determinant of its success going forward, lies in how it uses data and applies machine intelligence at scale over the next decade to compete in apps and digital services, content, and other adjacencies. And I would say this applies to these five leaders and virtually any company in the next decade. Look, digital means data, and digital businesses are data driven. Data changes how we think about competition. Just look at Amazon's moves in content, grocery, logistics. Look at Google in automobiles, Apple and Amazon in music. You know, interestingly, Microsoft positions this as a competitive advantage, especially in retail. For instance, touting Walmart as a partner, not a competitor, a la Amazon.
The point is that digital data, AI, and Cloud bring forth highly disruptive possibilities and are enabling these giants to enter businesses that previously were insulated from outsiders. And in the case of the Cloud, it's paving the way. Just look at the data from Amazon. The left bar shows Amazon's revenue. AWS represents only 12% of the total company's turnover. But as you can see on the right-hand side, it accounts for almost half of the company's operating income. So, the Cloud is essentially funding Amazon's entrance into all these other businesses and powering its scale. Now let's bring in some ETR data to show what's happening in the Enterprise in terms of share shifts. This chart is a double-Y axis that shows spending levels on the left-hand side, represented by the bars, and the average change in spending, represented by the dots. Focus for a second on the dots and the percentages. Container orchestration at 29% change. Container platforms at 19.7%. These are Cloud-native technologies and customers are voting with their wallets. Machine learning and AI, nearly 18% change. Cloud computing itself still in the 16% range, 10 plus years on. Look at analytics and big data, in the double digits still, 10 years into the big data movement. So, you can see the ETR data shows that the spending action is in and around Cloud, AI, and data. And in the red, look at the Moore's Law techs like servers and storage. Now, this isn't to say that those go away. I fully understand you need servers, and storage, and networking, and database, and software to power the Cloud, but this data shows that right now, these discrete cocktail technologies are gaining spending momentum. So, the question I want to leave you with is, what does this mean for incumbents? Those that are not digital natives or not born in the Cloud? Well, the first thing I'd point out is that while the trillionaires look invincible today, history suggests that they are not invulnerable.
The rise of China, India, open-source, peer-to-peer models, and open models could coalesce and disrupt these big guys if they miss a step or a cycle. The second point I would make is that incumbents are often too complacent. More often than not, in my experience, there is complacency, and there will be a fallout. I hear a lot of lip service given to digital and data driven, but often I see companies that talk the talk but don't walk the walk. Change will come, the incumbents will be disrupted, and that is going to cause action at the top. The good news is that the incumbents don't have to build the tech. They can compete with the disruptors by applying machine intelligence to their unique data sets, and they can buy technologies like AI and the Cloud from suppliers. The degree to which they are comfortable buying from these suppliers, who may also be competitors, will play out over time. But I would argue that building competitive advantage sooner rather than later with data, and learning to apply machine intelligence and AI to their unique businesses, will allow them to thrive, protect their existing businesses, and grow. These markets are large, and the incumbents have inherent advantages in terms of resources, relationships, brand value, customer affinity, and domain knowledge. If they apply those advantages and transform from the top with strong leadership, they will do very, very well in my view. This is Dave Vellante signing out from this latest episode of theCUBE Insights powered by ETR. Thanks for watching everybody. We'll see you next time, and please feel free to comment. On LinkedIn, you can DM me, @dvellante. And don't forget, we turned this into a podcast, so check that out on your favorite podcast player. Thanks again.
SUMMARY :
In this Breaking Analysis from the SiliconANGLE Media office, Dave Vellante argues that the source of innovation in technology has been permanently altered: the combination of data, machine intelligence, and cloud is surpassing Moore's Law as the industry's engine of innovation. The cloud brings agility, scale, and the ability to fail quickly and cheaply, and it is funding the trillion-dollar giants' entry into adjacent businesses. Incumbents can still compete by applying machine intelligence to their unique data sets and buying AI and cloud capabilities from suppliers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cisco | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
AWS | ORGANIZATION | 0.99+ |
Dave Moschella | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Walmart | ORGANIZATION | 0.99+ |
Hewlett | ORGANIZATION | 0.99+ |
September 2018 | DATE | 0.99+ |
January 2020 | DATE | 0.99+ |
19.7% | QUANTITY | 0.99+ |
50 years | QUANTITY | 0.99+ |
29% | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
10 plus years | QUANTITY | 0.99+ |
16% | QUANTITY | 0.99+ |
Hanneman | PERSON | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
second point | QUANTITY | 0.99+ |
2010 | DATE | 0.99+ |
@dvellante | PERSON | 0.99+ |
Russ Hanneman | PERSON | 0.99+ |
566 billion | QUANTITY | 0.99+ |
three elements | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
five leaders | QUANTITY | 0.99+ |
Metcalfe | PERSON | 0.99+ |
Moore's Law | TITLE | 0.99+ |
each | QUANTITY | 0.98+ |
Boston, Massachusetts | LOCATION | 0.98+ |
last century | DATE | 0.98+ |
three main factors | QUANTITY | 0.98+ |
next decade | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
Seeing Digital | TITLE | 0.97+ |
Trillionaire's Club | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
ETR | ORGANIZATION | 0.96+ |
12% | QUANTITY | 0.96+ |
Berkshire | LOCATION | 0.96+ |
today | DATE | 0.96+ |
trillion dollar | QUANTITY | 0.96+ |
this week | DATE | 0.95+ |
five public companies | QUANTITY | 0.95+ |
China | LOCATION | 0.94+ |
Cloud | TITLE | 0.94+ |
Silicon Valley | LOCATION | 0.94+ |
Moore | ORGANIZATION | 0.94+ |
U.S. | LOCATION | 0.94+ |
three things | QUANTITY | 0.92+ |
SiliconANGLE | ORGANIZATION | 0.92+ |
five data-driven companies | QUANTITY | 0.88+ |
first thing | QUANTITY | 0.87+ |
India | LOCATION | 0.85+ |
ORGANIZATION | 0.85+ | |
years | QUANTITY | 0.79+ |
nearly 18% | QUANTITY | 0.78+ |
Buno Pati, Infoworks io | CUBEConversation January 2020
>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello everyone, and welcome to this CUBE Conversation. You know, theCUBE has been following the trends in the so-called big data space since 2010. And one of the things that we reported on for a number of years is the complexity involved in wrangling and making sense out of data. The allure of this idea of no schema on write and very low cost platforms like Hadoop became a data magnet. And for years, organizations would shove data into a data lake. And of course the joke was it became a data swamp. And organizations really struggled to realize the promised return on their big data investments. Now, while the cloud certainly simplified infrastructure deployment, it really introduced a much more complex data environment and data pipeline, with dozens of APIs and a mind-boggling array of services that required highly skilled data engineers to properly ingest, shape, and prepare that data so that it could be turned into insights. This became a real time suck for data pros, who spent 70 to 80% of their time wrestling with data. A number of people saw the opportunity to solve this problem and automate the heavy lift of data, and simplify the process to ingest, synchronize, transform, and really prepare data for analysis. And one of the companies that is attacking this challenge is InfoWorks. And with me to talk about the evolving data landscape is Buno Pati, CEO of InfoWorks. Buno, great to see you, thanks for coming in. >> Well thank you Dave, thanks for having me here. >> You're welcome. I love that you're in Palo Alto, you come to MetroWest in Boston to see us (Buno laughs), that's great. Well welcome. So, you heard my narrative. We're 10 years plus into this big data theme and meme. What did we learn, what are some of the failures and successes that we can now build on, from your point of view?
>> All right, so Dave, I'm going to start from the top, with why big data, all right? I think this big data movement really started with the realization by companies that they need to transform their customer experience and their operations, in order to compete effectively in this increasingly digital world, right? And in that context, they also realized very quickly that data was the key asset on which this transformation would be built. So given that, you look at this and say, "What is digital transformation really about?" It is about competing with digital disruption, or fending off digital disruption. And this has become, over time, an existential imperative. You cannot survive and be relevant in this world without leveraging data to compete with others who would otherwise disrupt your business. >> You know, let's stay on that for a minute, because when we started the whole big data, covering that big data space, you didn't really hear about digital transformation. That's sort of a more recent trend. So I got to ask you, what's the difference between a business and a digital business, in your view? >> That is the foundational question behind big data. So if you look at a digital native, there are many of them that you can name. These companies start by building a foundational platform on which they build their analytics and data programs. It gives them a tremendous amount of agility and the right framework within which to build a data-first strategy. A data-first strategy where business information is persistently collected and used at every level of the organization. Furthermore, they take this and they automate this process. Because if you want to collect all your data and leverage it at every part of the business, it needs to be a highly automated system, and it needs to be able to seamlessly traverse on-premise, cloud, hybrid, and multi-cloud environments. Now, let's look at a traditional business. 
In a traditional enterprise, there is no foundational platform. There are things like point tools for ETL, and data integration, and you can name a whole slew of other things, that need to be stitched together and somehow made to work to deliver data to the applications that consume it. The strategy is not a data-first strategy. It is use case by use case. When there is a use case, people go and find the data, they gather the data, they transform that data, and eventually feed an application. That's a process that can take months to years, depending on the complexity of the project that they're trying to deliver. And they don't automate this. This is heavily dependent, as you pointed out, on engineering talent, highly skilled engineering talent that is scarce. And they have not seamlessly traversed the various clouds and on-premise environments, but rather fragmented those environments, where individual teams are focused on a single environment, building different applications, using different tools, and different infrastructure. >> So you're saying the digital native company puts data at the core. They organize around that data, as opposed to maybe around a bottling plant, or around people. And then they leverage that data for competitive advantage through a platform that's kind of table stakes. And then obviously there's cultural aspects and other skills that they need to develop, right? >> Yeah, they have an ability which traditional enterprises don't. Because of this choice of a data-first strategy with a foundational platform, they have the ability to rapidly launch analytics use cases and iterate on them. That is not possible in a traditional or legacy environment. >> So their speed to market and time to value is going to be much better than their competition. This gets into the risk of disruption. Sometimes we talk about cloud native and cloud naive. You could talk about digital native and digital naive.
So it's hard for incumbents to fend off the disrupters, and then ultimately become disrupters themselves. But what are you seeing in terms of some of the trends where organizations are having success there? >> One of the key trends that we're seeing, or key attributes of companies that are seeing a lot of success, is when they have organized themselves around their data. Now, what do I mean by that? This is usually a high-level mandate coming down from the top of the company, where they're forming centralized groups to manage the data and make it available for the rest of the organization to use. There are a variety of names that are being used for this. People are calling it their data fabric. They're calling it data as a service, which is pretty descriptive of what it ends up being. And those are terms that are all sort of representing the same concept of a centralized environment and, ideally, a highly automated environment that serves the rest of the business with data. And the goal, ultimately, is to get any data at any time for any application. >> So, let's talk a little bit about the cloud. I mentioned up front that the cloud really simplified infrastructure deployment, but it really didn't solve this problem of, we talked about in terms of data wrangling. So, why didn't it solve that problem? And you got companies like Amazon and Google and Microsoft, who are very adept at data. They're some of these data-first companies. Why is it that the cloud sort of in and of itself has not been able to solve this problem? >> Okay, so when you say solve this problem, it sort of begs the question, what's the goal, right? And if I were to very simply state the goal, I would call it analytics agility. It is gaining agility with analytics. 
Companies are going from a traditional world, where they had to generate a handful of BI and other reporting type of dashboards in a year, to where they literally need to generate thousands of these things in a year, to run the business and compete with digital disruption. So agility is the goal. >> But wait, the cloud is all about agility, is it not? >> It is, when you talk about agility of compute and storage infrastructure. So, there are three layers to this problem. The first is, what is the compute and storage infrastructure? The cloud is wonderful in that sense. It gives you the ability to rapidly add new infrastructure and spin it down when it's not in use. That is a huge blessing, when you compare it to the six to nine months, or perhaps even longer, that it takes companies to order, install, and test hardware on premise, and then find that it's only partially used. The next layer on that is what is the operating system on which my data and analytics are going to be run? This is where Hadoop comes in. Now, Hadoop is inherently complex, but operating systems are complex things. And Spark falls in that category. Databricks has taken some of the complexity out of running Spark because of their sort of manage service type of offering. But there's still a missing layer, which leverages that infrastructure and that operating system to deliver this agility where users can access data that they need anywhere in the organization, without intensely deep knowledge of what that infrastructure is and what that operating system is doing underneath. >> So, in my up front narrative, I talked about the data pipeline a little bit. But I'm inferring from your comments on platform that it's more than just this sort of narrow data pipeline. There's a macro here. I wonder if you could talk about that a little bit. >> Yeah. So, the data pipeline is one piece of the puzzle. What needs to happen? Data needs to be ingested. It needs to be brought into these environments. 
It has to be kept fresh, because the source data is persistently changing. It needs to be organized and cataloged, so that people know what's there. And from there, pipelines can be created that ultimately generate data in a form that's consumable by the application. But even surrounding that, you need to be able to orchestrate all of this. Typical enterprise is a multi-cloud enterprise. 80% of all enterprises have more than one cloud that they're working on, and on-premise. So if you can't orchestrate all of this activity in the pipelines, and the data across these various environments, that's not a complete solution either. There's certainly no agility in that. Then there's governance, security, lineage. All of this has to be managed. It's not simply creation of the pipeline, but all these surrounding things that need to happen in order for analytics to run at-scale within enterprises. >> So the cloud sort of solved that layer one problem. And you certainly saw this in the, not early days, but sort of mid-days of Hadoop, where the cloud really became the place where people wanted to do a lot of their Hadoop workloads. And it was kind of ironic that guys like Hortonworks, and Cloudera and MapR really didn't have a strong cloud play. But now, it's sort of flipping back where, as you point out, everybody's multi-cloud. So you have to include a lot of these on-prem systems, whether it's your Oracle database or your ETL systems or your existing data warehouse, those are data feeds into the cloud, or the digital incumbent who wants to be a digital native. They can't just throw all that stuff away, right? So you're seeing an equilibrium there. >> An equilibrium between ... ? >> Yeah, between sort of what's in the cloud and what's on-prem. 
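The stages described in this part of the conversation — ingest, keep fresh, catalog, build pipelines, orchestrate across environments, govern — can be sketched as a single orchestration loop. This is a hedged, minimal sketch of the general pattern only; the class names, sources, and catalog fields are hypothetical illustrations, not InfoWorks' actual DataFoundry API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    """A chain of transformation steps from an ingested source to a consumable dataset."""
    name: str
    source: str  # e.g. "on_prem_oracle", "aws_s3" -- hypothetical environment names
    steps: list[Callable[[list], list]] = field(default_factory=list)

    def run(self, records: list) -> list:
        for step in self.steps:
            records = step(records)
        return records

@dataclass
class Orchestrator:
    """Coordinates ingestion, pipelines, and cataloging across environments."""
    catalog: dict = field(default_factory=dict)  # dataset name -> lineage/freshness metadata

    def ingest(self, source: str) -> list:
        # Placeholder: a real system would pull incrementally to keep data fresh.
        return [{"source": source, "value": v} for v in range(3)]

    def execute(self, pipeline: Pipeline) -> list:
        records = self.ingest(pipeline.source)
        result = pipeline.run(records)
        # Catalog the output so consumers know what exists and where it came from.
        self.catalog[pipeline.name] = {"lineage": pipeline.source, "rows": len(result)}
        return result

orc = Orchestrator()
clean = Pipeline("customer_360", "on_prem_oracle",
                 steps=[lambda rs: [r for r in rs if r["value"] > 0]])
out = orc.execute(clean)
print(len(out))            # two records survive the filter
print(orc.catalog["customer_360"])
```

In a real enterprise data ops system these steps would be incremental, metadata-driven, and governed; the sketch only shows how ingestion, transformation, and cataloging hang together under one orchestrator rather than as stitched-together point tools.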
Let me ask it this way: If the cloud is not a panacea, is there an approach that does really solve the problem of different datasets, the need to ingest them from different clouds, on-prem, and bring them into a platform that can be analyzed and drive insights for an organization? >> Yeah, so I'm going to stay away from the word panacea, because I don't think there ever is really a panacea to any problem. >> That's good, that means we got a good roadmap for our business then. (both laugh) >> However, there is a solution. And the solution has to be guided by three principles. Number one, automation. If you do not automate, the dependence on skill talent is never going to go away. And that talent, as we all know, is very very scarce and hard to come by. The second thing is integration. So, what's different now? All of these capabilities that we just talked about, whether it's things like ETL, or cataloging, or ingesting, or keeping data fresh, or creating pipelines, all of this needs to be integrated together as a single solution. And that's been missing. Most of what we've seen is point tools. And the third is absolutely critical. For things to work in multi-cloud and hybrid environments, you need to introduce a layer of abstraction between the complexity of the underlying systems and the user of those systems. And the way to think about this, Dave, is to think about it much like a compiler. What does a compiler do, right? You don't have to worry about what Intel processor is underneath, what version of your operating system you're running on, what memory is in the system. Ultimately, you might-- >> As much as we love assembly code. >> As much as we love assembly code. Now, so take the analogy a little bit further, there was a time when we wrote assembly code because there was no compiler. So somebody had to sit back and say, "Hey, wouldn't it be nice if we abstracted away from this?" 
(both laugh) >> Okay, so this sort of sets up my next question, which is, is this why you guys started InfoWorks? Maybe you could talk a little bit about your why, and kind of where you fit. >> So, let me give you the history of InfoWorks. Because the vision of InfoWorks, believe it or not, came out of a rear view mirror. Looking backwards, not forwards. And then predicting the future in a different manner. So, Amar Arsikere is the founder of InfoWorks. And when I met him, he had just left Zynga, where he was the general manager of their gaming platform. What he told me was very very simple. He said he had been at Google at a time when Google was moving off of the legacy systems of, I believe it was Netezza, and Oracle, and a variety of things. And they had just created Bigtable, and they wanted to move and create a data warehouse on Bigtable. So he was given that job. And he led that team. And that, as you might imagine, was this massive project that required a high degree of automation to make it all come together. And he built that, and then he built a very similar system at Zynga, when he was there. These foundational platforms, going back to what I was talking about before digital days. When I met him, he said, "Look, looking back, "Google may have been the only company "that needed such a platform. "But looking forward, "I believe that everyone's going to need one." And that has, you know, absolute truth in it, and that's what we're seeing today. Where, after going through this exercise of trying to write machine code, or assembly code, or whatever we'd like to call it, down at the detailed, complex level of an operating system or infrastructure, people have realized, "Hey, I need something much more holistic. "I need to look at this from a enterprise-wide perspective. "And I need to eliminate all of this dependence on," kind of like the cloud plays a role because it eliminates some of the dependence, or the bottlenecks around hardware and infrastructure. 
"And ultimately gain a lot more agility "than I'm able to do with legacy methodology." So you were asking early on, what are the lessons learned from that first 10 years? And a lot of technology goes through these types of cycles of hype and disillusionment, and we all know the curve. I think there are two key lessons. One is, just having a place to land your data doesn't solve your problem. That's the beginning of your problems. And the second is that legacy methodologies do not transfer into the future. You have to think differently. And looking to the digital natives as guides for how to think, when you're trying to compete with them is a wonderful perspective to take. >> But those legacy technologies, if you're an incumbent, you can't just rip 'em and throw 'em out and convert. You're going to use them as feeders to your digital platform. So, presumably, you guys have products. You call this space Enterprise Data Ops and Orchestration, EDO2. Presumably you have products and a portfolio to support those higher layer challenges that we talked about, right? >> Yeah, so that's a really important question. No, you don't rip and replace stuff. These enterprises have been built over years of acquisitions and business systems. These are layers, one on top of another. So think about the introduction of ERP. By the way, ERP is a good analogy for what happened, because those were point tools that were eventually combined into a single system called ERP. Well, these are point capabilities that are being combined into a single system for EDO2, or Enterprise Data Operations and Orchestration. The old systems do not go away. And we are seeing some companies wanting to move some of their workloads from old systems to new systems. But that's not the major trend. The major trend is that new things that get done, the things that give you holistic views of the company, and then analytics based on that holistic view, are all being done on the new platforms. So it's a layer on top. 
It's not a rip and replace of the layers underneath. What's in place stays in place. But for the layer on top, you need to think differently. You cannot use all the legacy methodologies and just say that's going to apply to the new platform or new system. >> Okay, so how do you engage with customers? Take a customer who's got, you know, on-prem, they've got legacy infrastructure, they don't want to get disrupted. They want to be a digital native. How do you help them? You know, what do I buy from you? >> Yeah, so our product is called DataFoundry. It is an EDO2 system. It is built on the three principles, founding principles, that I mentioned earlier. It is highly automated. It is integrated in all the capabilities that surround pipelines, perhaps. And ultimately, it's also abstracting. So we're able to very easily traverse one cloud to another, or on-premise to the cloud, or even back. There are some customers that are moving some workloads back from the cloud. Now, what's the benefit here? Well first of all, we lay down the foundation for digital transformation. And we enable these companies to consolidate and organize their data in these complex hybrid, cloud, multi-cloud environments. And then generate analytics use cases 10x faster with about a tenth of the resources. And I'm happy to give you some examples on how that works. >> Please do. I mean, maybe you could share some customer examples? >> Yeah, absolutely. So, let me talk about Macy's. >> Okay. >> Macy's is a customer of ours. They've been a customer for about, I think about 14 months at this point in time. And they had built a number of systems to run their analytics, but then recognized what we're seeing other companies recognize. And that is, there's a lot of complexity there. And building it isn't the end game. Maintaining it is the real challenge, right? So even if you have a lot of talent available to you, maintaining what you built is a real challenge. So they came to us. 
And within a period of 12 months, I'll just give you some numbers that are just mind-blowing. They are currently running 165,000 jobs a month. Now, what's a job? A job is an ingestion job, or a synchronization job, or a transformation. They have launched 431 use cases over a period of 12 months. And you know what? They're just ramping. They will get to thousands. >> Scale. >> Yeah, scale. And they have ingested a lot of data, brought in a lot of data sources. So to do that in a period of 12 months is unheard of. It does not happen. Why is it important for them? So what problem are they trying to solve? They're a retailer. They are being digitally disrupted like (chuckles) no one else. >> They have an Amazon war room-- >> Right. >> No doubt. >> And they have had to build themselves out as an omni-channel retailer now. They are online, they are also with brick and mortar stores. So you take a look at this. And the key to competing with digital disrupters is the customer experience. What is that experience? You're online, how does that meld with your in-store experience? What happens if I buy online and return something in a store? How does all this come together into a single unified experience for the consumer? And that's what they're chasing. So that was the first application that they came to us with. They said, "Look, let us go into a customer 360. "Let us understand the entirety "of that customer's interaction "and touchpoints with our business. "And having done so, we are in a position "to deliver a better experience." >> Now that's a data problem. I mean, different data sources, and trying to understand 360, I mean, you got data all over the place. >> All over the place. (speaking simultaneously) And there's historical data, there's stuff coming in from, you know, what's online, what's in the store. And then they progress from there. I mean, they're not restricting it to customer experience and selling. 
They're looking at merchandising, and inventory, and fulfillment, and store operations. Simple problem. You order something online, where do I pull this from? A store or a warehouse? >> So this is, you know, big data 2.0, just to use a sort of silly term. But it's really taking advantage of all the investment. I've often said, you know, Hadoop, for all the criticism it gets, it did lower our cost of getting data into, you know, at least one virtual place. And it got us thinking about how to get insights out of data. And so, what you're describing is the ability to operationalize your data initiatives at scale. >> Yeah, you can absolutely get your insights off of Hadoop. And I know people have different opinions of Hadoop, given their experience. But what they don't have, what these customers have not achieved yet, most of them, is that agility, right? So, how easily can you get your insights off of Hadoop? Do I need to hire a boatload of consultants who are going to write code for me, and shovel data in, and create these pipelines, and so forth? Or can I do this with a click of a button, right? And that's the difference. That is truly the difference. The level of automation that you need, and the level of abstraction that you need, away from this complexity, has not been delivered. >> We did, in, it must have been 2011, I think, the very first big data market study from anybody in the world, and put it out on, you know, Wikibon, free research. And one of the findings was (chuckles) this is a huge services business. I mean, the professional service is where all the money was going to flow because it was so complicated. And that's kind of exactly what happened. But now we're entering, really it seems like a phase where you can scale, and operationalize, and really simplify, and really focus your attention on driving business value, versus making stuff work. >> You are absolutely correct. So I'll give you the numbers. 55% of this industry is services. 
About 30% is software, and the rest is hardware. Break it down that way. 55%. So what's going on? People will buy a big data system. Call it Hadoop, it could be something in the cloud, it could be Databricks. And then, this is welcome to the world of SIs. Because at this point, you need these SIs to write code and perform these services in order to get any kind of value out of that. And look, we have some dismal numbers that we're staring at. According to Gartner, only 17% of those who have invested in Hadoop have anything in production. This is after how many years? And you look at surveys from, well, pick your favorite. They all look the same. People have not been able to get the value out of this, because it is too hard. It is too complex and you need too many consultants (laughs) delivering services for you to make this happen. >> Well, what I like about your story, Buno, is you're not, I mean, a lot of the data companies have pivoted to AI. Sort of like, we have a joke, ya know, same wine, new bottle. But you're not talking about, I mean sure, machine intelligence, I'm sure, fits in here, but you're talking about really taking advantage of the investments that you've made in the last decade and helping incumbents become digital natives. That sounds like it's at least a part of your mission here. >> Not become digital natives, but rather compete with them. >> Yeah, right, right. >> Effectively, right? >> Yep, okay. >> So, yeah, that is absolutely what needs to get done. So let me talk for a moment about AI, all right? Way back when, there was another wave of AI in the late 80s. I was part of that, I was doing my PhD at the time. And that obviously went nowhere, because we didn't have any data, we didn't have enough compute power or connectivity. Pretty inert. So here it is again. Very little has changed. Except for we do have the data, we have the connectivity, and we have the compute power. But do we really? So what's AI without the data? Just A, right? 
There's nothing there. So what's missing, even for AI and ML to be, and I believe these are going to be powerful game changers. But for them to be effective, you need to provide data to it, and you need to be able to do so in a very agile way, so that you can iterate on ideas. No one knows exactly what AI solution is going to solve your problem or enhance your business. This is a process of experimentation. This is what a company like Google can do extraordinarily well, because of this foundational platform. They have this agility to keep iterating, and experimenting, and trying ideas. Because without trying them, you will not discover what works best. >> Yeah, I mean, for 50 years, this industry has marched to the cadence of Moore's Law, and that really was the engine of innovation. And today, it's about data, applying machine intelligence to that data. And the cloud brings, as you point out, agility and scale. That's kind of the new cocktail for innovation, isn't it? >> The cloud brings agility and scale to the infrastructure. >> In low risk, as you said, right? >> Yeah. >> Experimentation, fail fast, et cetera. >> But without an EDO2 type of system, that gives you a great degree of automation, you could spend six months to run one experiment with AI. >> Yeah, because-- >> In gathering data and feeding it to it. >> 'Cause if the answer is people and throwing people at the problem, then you're not going to scale. >> You're not going to scale, and you're never going to really leverage AI and ML capabilities. You need to be able to do that not in six months, in six days, right, or less. >> So let's talk about your company a little bit. Can you give us the status, you know, where you're at? As their newly minted CEO, what your sort of goals are, milestones that we should be watching in 2020 and beyond? >> Yeah, so newly minted CEO, I came in July of last year. This has been an extraordinary company. I started my journey with this company as an investor. 
And it was funded by actually two funds that I was associated with, first being Nexus Venture Partners, and then Centerview Capital, where I'm still a partner. And myself and my other two partners looked at the opportunity and what the company had been able to do. And in July of last year, I joined as CEO. My partner, David Dorman, who used to be CEO of AT&T, he joined as chairman. And my third partner, Ned Hooper, joined as President and Chief Operating Officer. Ned used to be the Chief Strategy Officer of Cisco. So we pushed pause on the funding, and that's about as all-in as a fund can get. >> Yeah, so you guys were operational experts that became investors, and said, "Okay, we're going to dive back in "and actually run the business." >> And here's why. So we obviously see a lot of companies as investors, as they go out and look for funding. There are three things that come together very rarely. One is a massive market opportunity combined with the second, which is the right product to serve that opportunity. But the third is pure luck, timing. (Dave chuckles) It's timing. And timing, you know, it's a very very challenging thing to try to predict. You can get lucky and get it right, but then again, it's luck. This had all three. It was the absolute perfect time. And it's largely because of what you described, the 10 years of time that had elapsed, where people had sort of run the experiment and were not going to get fooled again by how easy this was supposed to be by just getting one piece or the other. They recognized that they need to take this holistic approach and deploy something as an enterprise-wide platform. 
>> Right, okay. You got great investors, hefty amount. Although, you know, in this day and age, you know, you're seeing just outrageous amounts being raised. Software obviously is a capital efficient business, but today you need to raise a lot of money for promotion, right, to get your name out there. What's your thoughts on, as a Silicon Valley investor, as this wave, I mean, get it while you can, I guess. You know, we're in the 10th year of this boom market. But your thoughts? >> You're asking me to put on my other hat. (Dave laughs) I think companies have, in general, raised too much money at too high a value too fast. And there's a penalty for that. And the down round IPO, which has become fashionable these days, is one of those penalties. It's a clear indication. Markets are very rational, public markets are very rational. And the pricing in a public market, when it's significantly below the pricing in a private market, is telling you something. So, we are a little old-fashioned in that sense. We believe that a company has to lay down the right foundation before it adds fuel to the mix and grows. You have to have evidence that the machinery that you build, whether it's for sales, or marketing, or other go-to-market activities, or even product development, is working. And if you do not see all of those signs, you're building a very fragile company. And adding fuel in that setting is like flooding the carburetor. You don't necessarily go faster. (laughs) You just-- >> Consume more. >> You consume more. So there's a little bit of, perhaps, old-fashioned discipline that we bring to the table. And you can argue against it. You can say, "Well, why don't you just raise a lot of money, "hire a lot of sales guys, and hope for the best?" >> See what sticks? (laughs) >> Yeah. We are fully expecting to build a large institution here. And I use that word carefully. And for that to happen, you need the right foundation down first. 
>> Well, that resonates with us east coast people. So, Buno, thanks very much for comin' on theCUBE and sharing with us your perspectives on the marketplace. And best of luck with InfoWorks. >> Thank you, Dave. This has been a pleasure. Thank you for having me here. >> All right, we'll be watching, thank you. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see ya next time. (upbeat music fades out)
Bill Vass, AWS | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel. Along with its ecosystem partners. >> Okay, welcome back everyone. It's theCUBE's live coverage here in Las Vegas for Amazon Web Services today, re:Invent 2019. It's theCUBE's seventh year covering re:Invent. Eight years they've been running this event. It gets bigger every year. It's been a great wave to ride on. I'm John Furrier, my cohost, Dave Vellante. We've been riding this wave, Dave, for years. It's so exciting, it gets bigger and more exciting. >> Lucky seven. >> This year more than ever. So much stuff is happening. It's been really exciting. I think there's a sea change happening, in terms of another wave coming. Quantum computing, big news here amongst other great tech. Our next guest is Bill Vass, VP of Technology, Storage Automation Management, part of the quantum announcement that went out. Bill, good to see you. >> Yeah, well, good to see you. Great to see you again. Thanks for having me on board. >> So, we love quantum, we talk about it all the time. My son loves it, everyone loves it. It's futuristic. It's going to crack everything. It's going to be the fastest thing in the world. Quantum supremacy. Andy referenced it in my one-on-one with him around quantum being important for Amazon. >> Yes, it is, it is. >> You guys launched it. Take us through the timing. Why, why now? >> Okay, so the Braket service, which is based on the bra-ket quantum notation introduced by Dirac, right? So we thought that was a good name for it. It provides for you the ability to do development in quantum algorithms using gate-based programming that's available, and then do simulation on classical computers, which is what we call our digital computers today now. (men chuckling) >> Yeah, it's a classic. >> These are classic computers all of a sudden, right? 
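[Editor's note: the "gate-based programming" and "simulation on classical computers" Bill describes can be made concrete with a toy example. The sketch below is a minimal, from-scratch state-vector simulator in plain Python, for intuition only; it is not the Braket SDK, and every name in it is made up.]

```python
import math

# Toy state-vector simulator for 2 qubits. A gate-based program is just a
# sequence of small matrices applied to a vector of 2**n complex amplitudes.
N_QUBITS = 2
state = [1 + 0j, 0j, 0j, 0j]  # start in |00>: all amplitude on index 0

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

def apply_single(state, gate, target):
    """Apply a 2x2 gate to qubit `target` (qubit 0 = leftmost bit)."""
    shift = N_QUBITS - 1 - target
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> shift) & 1
        for out in (0, 1):
            j = (i & ~(1 << shift)) | (out << shift)
            new[j] += gate[out][bit] * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` in every basis state where `control` is 1."""
    cs, ts = N_QUBITS - 1 - control, N_QUBITS - 1 - target
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << ts) if (i >> cs) & 1 else i
        new[j] = amp
    return new

# The classic two-gate circuit: H on qubit 0, then CNOT -> a Bell state.
state = apply_single(state, H, 0)
state = apply_cnot(state, 0, 1)
probs = {f"{i:02b}": abs(a) ** 2 for i, a in enumerate(state)}
print(probs)  # |00> and |11> each come out with probability ~0.5
```

The same vector-of-amplitudes bookkeeping is what a classical simulator behind a service like Braket has to do, which is also why that memory cost doubles with every qubit added.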
And then, actually do execution of your algorithms on, today, three different quantum computers, one that's annealing and two that are gate-based machines. And that gives you the ability to test them in parallel and separate from each other. In fact, last week, I was working with the team and we had two machines, an ion trap machine and an electromagnetic tunneling machine, solving the same problem and passing variables back and forth from each other, you could see the CloudWatch metrics coming out, and the data was going to an S3 bucket on the output. And we do it all in a Jupyter notebook. So it was pretty amazing to see all that running together. I think it's probably the first time two different machines with two different technologies had worked together on a cloud computer, fully integrated with everything else, so it was pretty exciting. >> So, quantum supremacy has been a word kicked around. A lot of hand waving, IBM, Google. Depending on who you talk to, there's different versions. But at the end of the day, quantum is a leap in computing. >> Bill: Yes, it can be. >> It can be. It's still early days, it would be day zero. >> Yeah, well I think if you think of, we're about where computers were with tubes if you remember, if you go back that far, right, right? That's about where we are right now, where you got to kind of jiggle the tubes sometimes to get them running. >> A bug gets in there. Yeah, yeah, that bug can get in there, and all of those kind of things. >> Dave: You flip 'em off with a punch card. Yeah, yeah, so for example, a number of the machines, they run for four hours and then they come down for a half hour for calibration. And then they run for another four hours. So we're still sort of at that early stage, but you can do useful work on them. And more mature systems, like for example D-Wave, which is an annealer, a little different than gate-based machines, is really quite mature, right? 
And so, I think as you go back and forth between these machines, the gate-based machines and annealers, you can really get a sense for what's capable today with Braket and that's what we want to do is get people to actually be able to try them out. Now, quantum supremacy is a fancy word for we did something you can't do on a classical computer, right? That's on a quantum computer for the first time. And quantum computers have the potential to exceed the processing power, especially on things like factoring and other things like that, or on Hamiltonian simulations for molecules, and those kinds of things, because a quantum computer operates the way a molecule operates, right, in a lot of ways using quantum mechanics and things like that. And so, it's a fancy term for that. We don't really focus on that at Amazon. We focus on solving customers' problems. And the problem we're solving with Braket is to get them to learn it as it's evolving, and be ready for it, and continue to develop the environment. And then also offer a lot of choice. Amazon's always been big on choice. And if you look at our processing portfolio, we have AMD, Intel x86, great partners, great products from them. We have Nvidia, great partner, great products from them. But we also have our Graviton 1 and Graviton 2, and our new GPU-type chip. And those are great products, too, I've been doing a lot on those, as well. And the customer should have that choice, and with quantum computers, we're trying to do the same thing. We will have annealers, we will have ion trap machines, we will have electromagnetic machines, and others available on Braket. 
>> Yeah, so some of it is on classical computers, as we call them, they have error-correction code built in. So you have, whether you know it or not, there's alpha particles that are flipping bits on your memory at all times, right? And if you don't have ECC, you'd get crashes constantly on your machine. And so, we've built in ECC, so we're trying to build the quantum computers with the proper error correction, right, to handle these things, 'cause nothing runs perfectly, you just think it's perfect because we're doing all the error correction under the covers, right? And so that needs to evolve on quantum computing. The ability to reproduce them in volume from an engineering perspective. Again, standard lithography has a yield rate, right? I mean, sometimes the yield is 40%, sometimes it's 20%, sometimes it's a really good fab and it's 80%, right? And so, you have a yield rate, as well. So, being able to do that. These machines also generally operate in a cryogenic world, that's a little bit more complicated, right? And they're also heavily affected by electromagnetic radiation, other things like that, so you have to sort of Faraday cage them in some cases, and other things like that. So there's a lot that goes on there. So managing a physical environment like cryogenics is challenging to do well, having the fabrication to reproduce it in a new way is hard. The physics is actually, I shudder to say well understood. I would say the way the physics works is well understood, how it works is not, right? No one really knows how entanglement works, they just know what it does, and that's understood really well, right? And so, so a lot of it is now, why we're excited about it, it's an engineering problem to solve, and we're pretty good at engineering. >> Talk about the practicality. Andy Jassy was on the record with me, quoted, said, "Quantum is very important to Amazon." >> Yes it is. >> You agree with that. He also said, "It's years out." You said that. 
He said, "But we want to make it practical "for customers." >> We do, we do. >> John: What is the practical thing? Is it just kicking the tires? Is it some of the things you mentioned? What's the core goal? >> So, in my opinion, we're at a point in the evolution of these quantum machines, and certainly with the work we're doing with Caltech and others, that the number of available qubits is starting to increase at an astronomic rate, a Moore's Law kind of rate, right? Whether it's, no matter which machine you're looking at out there, and there's about 200 different companies building quantum computers now, and so, and they're all good technology. They've all got challenges as well, such as reproducibility, and those kind of things. And so now's a good time to start learning how to do this gate-based programming knowing that it's coming, because quantum computers, they won't replace a classical computer, so don't think that. Because there is no quantum RAM, you can't run 200 petabytes of data through a quantum computer today, and those kind of things. What it can do is factoring very well, or it can do probability equations very well. It'll have effects on Monte Carlo simulations. It'll have effects specifically in material sciences where you can simulate molecules for the first time that you just can't do on classical computers. And when I say you can't do on classical computers, my quantum team always corrects me. They're like, "Well, no one has proven "that there's an algorithm you can run "on a classical computer that will do that yet," right? (men chuckle) So there may be times when you say, "Okay, I did this on a quantum computer," and you can only do it on a quantum computer. But then some very smart mathematician says, "Oh, I figured out how to do it on a regular computer. "You don't need a quantum computer for that." And that's constantly evolving, as well, in parallel, right? 
And so, that's what that argument between IBM and Google on quantum supremacy is about. And that's an unfortunate distraction in my opinion. What Google did was quite impressive, and if you're in the quantum world, you should be very happy with what they did. They had a very low error rate with a large number of qubits, and that's a big deal. >> Well, I just want to ask you, this industry is an arms race. But, with something like quantum where you've got 200 companies actually investing in it this early, is collaboration maybe a model here? I mean, what do you think? You mentioned Caltech. >> It certainly is for us because, like I said, we're going to have multiple quantum computers available, just like we collaborate with Intel, and AMD, and the other partners in that space, as well. That's sort of the nice thing about being a cloud service provider is we can give customers choice, and we can have our own innovation, plus their innovations available to customers, right? Innovation doesn't just happen in one place, right? We got a lot of smart people at Amazon, we don't invent everything, right? (Dave chuckles) >> So I got to ask you, obviously, we can take cube quantum and call it cubits, not to be confused with theCUBE video highlights. Joking aside, classical computers, will there be a classical cloud? Because this is kind of a futuristic-- >> Or you mean a quantum cloud? >> Quantum cloud, well then you get the classic cloud, you got the quantum cloud. >> Well no, they'll be together. So I think a quantum computer will be used like we used to use a math coprocessor if you like, or FPGAs are used today, right? So, you'll go along and you'll have your problem. And I'll give you a real, practical example. So let's say you had a machine with 125 qubits, okay? You could just start doing some really nice optimization algorithms on that. So imagine there's this company that ships stuff around a lot, I wonder who that could be? 
And they need to continuously optimize their delivery for a truck, right? And that changes all the time. Well, that algorithm, if you're doing hundreds of deliveries in a truck, is very complicated. That traveling salesman algorithm is an NP-hard problem when you do it, right? And so, what would be the fastest, best path? But you've got to take into account weather and traffic, so that's changing. So you might have a classical computer do those algorithms overnight for all the delivery trucks and then send them out to the trucks. The next morning they're driving around. But it takes a lot of computing power to do that, right? Well, a quantum computer can do that kind of probabilistic equation, not deterministic, a best-fit algorithm like that, much faster. And so, you could have it every second providing that. So your classical computer is sending out the manifests, interacting with the person, it's got the website on it. And then, it gets to the part where here's the problem to calculate, we call it a shot when you're on a quantum computer, and it runs in a few seconds what would take an hour or more. >> It's a fast job, yeah. >> And it comes right back with the result. And then it continues with its thing, passes it to the driver. Another update occurs, (buzzing) and it's just going on all the time. So those kinds of things are very practical and coming. >> I've got to ask, for the younger generations, my son's super interested, as I mentioned before you came on. Quantum attracts the younger, smart kids coming into the workforce, engineering talent. What's the best path for someone who has either an advanced degree, or no degree, to get involved in quantum? Is there certain advice you'd give someone? >> So the reality is, I mean, obviously having taken quantum mechanics in school and understanding the physics behind it to an extent, as much as you can understand the physics behind it, right?
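The delivery-route problem Bill walks through above is the classic traveling salesman problem, and the overnight classical run he describes comes from its factorial search space: n stops means (n-1)! closed tours to compare. A brute-force sketch on toy coordinates (invented for illustration) makes the blowup concrete:

```python
import itertools
import math

def route_length(points, order):
    """Total closed-tour distance visiting `points` in `order`."""
    tour = [points[i] for i in order] + [points[order[0]]]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

def best_route(points):
    """Exhaustive search over (n-1)! tours; only feasible for tiny n."""
    n = len(points)
    rest = min(itertools.permutations(range(1, n)),
               key=lambda p: route_length(points, (0,) + p))
    return (0,) + rest

stops = [(0, 0), (0, 2), (3, 2), (3, 0)]    # four delivery stops on a grid
order = best_route(stops)
print(order, route_length(stops, order))    # -> (0, 1, 2, 3) 10.0
```

At just 10 stops that's already 362,880 candidate tours per truck, per re-plan; annealers and quantum best-fit approaches attack exactly this kind of growth.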
I think the other areas, there are programs at universities focused on quantum computing, there's a bunch of them. So, they can go into that direction. But even just regular computer science, or regular mechanical and electrical engineering, are all needed. Mechanical around the cooling, and all that other stuff. Electrical, these are electrically-based machines, just like a classical computer is. And being able to code at a low level is another area that's tremendously valuable right now. >> Got it. >> You mentioned best fit is coming, that use case. I mean, can you give us a sense of a timeframe? And people will say, "Oh, 10, 15, 20 years." But you're talking much sooner. >> Oh, I think it's sooner than that, I do. And it's hard for me to predict exactly when we'll have it. You can already do some of the best fit today with some of the annealing machines, like D-Wave, right? So it's a matter of, people want to use a quantum computer because they need to do something fast, they don't care how much it costs, they need to do something fast. Or it's too expensive to do it on a classical computer, or you just can't do it at all on a classical computer. Today, there isn't much of that last one, you can't do it at all, but that's coming. As you get to around 50, 52 qubits, it's very hard to simulate that on a classical computer. You're starting to reach the edge of what you can practically do on a classical computer. At about 125 qubits, you probably are at a point where you just can't simulate it anymore. >> But you're talking years, not decades, for this use case? >> Yeah, I think you're definitely talking years. And you know, it's interesting, if you'd asked me two years ago how long it would take, I would've said decades. So that's how fast things are advancing right now, and I think that-- >> Yeah, and the computers just keep getting faster and faster.
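The 50-qubit crossover Bill cites falls out of simple arithmetic: exact simulation stores 2^n complex amplitudes. Assuming 16 bytes per amplitude (one complex double), the memory bill looks like this:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to hold a full 2^n state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) / 2**30)        # -> 16.0 (GiB: a laptop can manage this)
print(statevector_bytes(50) / 2**50)        # -> 16.0 (PiB: beyond any single machine)
print(statevector_bytes(125).bit_length())  # -> 130 (bits just to write the byte count)
```

So around 50 qubits you leave the realm of practical simulation, and by 125 the number itself is astronomical, matching the thresholds mentioned in the conversation.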
>> Yeah, but the ability to fabricate, the understanding, there's a number of architectures that are very well proven, it's just a matter of getting the error rates down, stability in place, the repeatable manufacturing in place; there's a lot of engineering problems. And engineering problems are good, we know how to do engineering problems, right? And we actually understand the physics, or at least we understand how the physics works. I won't claim that, what is it, "spooky action at a distance" is what Einstein said for entanglement, right? And that's a core piece of this, right? And so, those are challenges, right? And that's part of the mystery of the quantum computer, I guess. >> So you're having fun? >> I am having fun, yeah. >> I mean, this is pretty intoxicating, technical problems, it's fun. >> It is. It is a lot of fun. Of course, the whole portfolio that I run over at AWS is just really a fun portfolio, between robotics, and autonomous systems, and IoT, and the advanced storage stuff that we do, and all the edge computing, and all the monitoring and management systems, and all the real-time streaming. So like Kinesis Video, that's the back end for the Amazon Go stores, and working with all that. It's a lot of fun, it really is, it's good. >> Well, Bill, we need an hour to get into that, so we may have to come up and see you, do a special story. >> Oh, definitely! >> We'd love to come up and dig in, and get a special feature program with you at some point. >> Yeah, happy to do that, happy to do that. >> Talk some robotics, some IoT, autonomous systems. >> Yeah, you can see all of it around here, we got it up and running around here, Dave. >> What a portfolio. >> Congratulations. >> Alright, thank you so much. >> Great news on the quantum. Quantum is here, quantum cloud is happening. Of course, theCUBE is going quantum. We've got a lot of qubits here. Lot of CUBE highlights, go to SiliconAngle.com. We got all the data here, we're sharing it with you.
I'm John Furrier with Dave Vellante talking quantum. Want to give a shout out to Amazon Web Services and Intel for setting up this stage for us. Thanks to our sponsors, we wouldn't be able to make this happen if it wasn't for them. Thank you very much, and thanks for watching. We'll be back with more coverage after this short break. (upbeat music)
Tobi Knaup, D2iQ | D2iQ Journey to Cloud Native 2019
(informative tune) >> From San Francisco, it's theCUBE. Covering D2iQ. Brought to you by D2iQ. (informative tune) >> Hey, welcome back everybody! Jeff Frick here with theCUBE. We're in downtown San Francisco at D2iQ headquarters, a beautiful office space here, right downtown. And we're talking about customers' journeys to cloud native. We talk about it all the time, you hear about cloud native, everyone's rushing in, Kubernetes is the hottest thing since sliced bread, but at the end of the day, you actually have to do it, and we're really excited to talk to the founder who's been on his own company journey as he's watching his customers' company journeys, and really kind of get into it a little bit. So, excited to have Tobi Knaup, he's a co-founder and CTO of D2iQ. Tobi, great to see you! >> Thanks for having me. >> So, before we jump into the company and where you are now, I want to go back a little bit. I mean, looking through your resume, and your LinkedIn, etc. You're doing it kind of the classic dream way for a founder. Did the Y Combinator thing, you've been at this for six years, you've changed the company a little bit. So, I wonder if you can just share, from a founder's perspective, I think you've gone through four, five rounds of funding, raised a lot of money, 200 plus million dollars. As you sit back now, if you even get a chance, and kind of reflect, what goes through your head? As you've gone through this thing, pretty cool. A lot of people would like this, they think they'd like to be sitting in your seat. (chuckles) What can you share? >> Yeah, it's definitely been, you know, an exciting journey. And it's one that changes all the time. You know, we learned so many things over the years. And when you start out, you create a company, right? A tech company, you have your idea for the product, you have the technology. You know how to do that, right? You know how to iterate that and build it out.
But there's many things you don't know as a technical founder with an engineering background, like myself. And so, I always joke with the team internally that I basically try to fire myself every six months. And what I mean by that is, your role really changes, right? In the very beginning I wrote code, and then I started managing engineers once we built up the team, then managed engineering managers, and then did product. Nowadays, I spend a lot of time with customers to talk about our vision, you know, where I see the industry going, where things are going, how we fit into the greater picture. So, you know, I think that's a big part of it, it's evolving with the company and, you know, learning the skills and evolving yourself. >> Right. It's just funny cause you think about tech founders, and there's some big ones, right? Some big companies out there, to pick on Zuckerberg, just to pick on him. But you know, when you start, kind of what your vision and your dream is and what you're coding in that early passion isn't necessarily where you end up. And as you said, your role in more of a leadership position now, more of a guidance and setting strategy and communicating with the market, communicating with customers, has changed. Has that been enjoyable for you? Do you, you know, kind of enjoy more the, I don't want to say the elder statesman role when you're a young guy, but more kind of that leadership role? Or just, you know, getting into the weeds and writing some code? >> Yeah. Yeah, what always excites me is helping customers or helping people solve problems, right? And we do that with technology, in our case, but really it's about solving the problems. And the problems are not always technical problems, right?
You know, the software that is at the core of our products, that's been running in production for many years, and, you know, in some sense, what we did before we founded the company, when I worked at Airbnb and my co-founders worked at, you know, Airbnb and Twitter, we're still helping companies do those same things today. And so, where we need to help the most sometimes, it's actually on education, right? So, solving those problems. How do you train up, you know, a thousand or 10 thousand internal developers at a large organization, on what are containers, what is container management, cluster management, how does cloud native work? That's often the biggest challenge for folks, and, you know, how do they transform their processes internally, how do they become really a cloud native organization. And so, you know, what motivates me is helping people solve problems in, whatever, you know, shape or form. >> Right. >> It's funny because it's analogous to what you guys do, in that you've got an open-source core, but people, I think, often underestimate the degree of difficulty around all the activities beyond just the core software. >> Mm-hmm. >> Whether, as you said, it's training, it's implementation, it's integration, it's best practices, it's support, it's connecting all these things together and staying on top of it. So, I think, you know, you're in a great position because it's not the software. That's not the hard part, that's arguably the easy part. So, as you've watched people, you know, deal with this crazy acceleration of change in our industry and this rapid move to cloud native, you know, spawned by the success of the public clouds, you know, how do you kind of stay grounded and not jump too fast at the next shiny object, but still stay current, but still, you know, kind of keep to your knitting in terms of the foundation of the company and delivering real value for the customers? >> Yeah. Yeah, I know, it's exactly right.
A lot of times, the challenges with adopting open source in the enterprise are, for example, around the skills, right? How do you hire a team that can manage that deployment and manage it for many years? Cause once software's introduced in an enterprise, it typically stays for a couple of years, right? And this gets especially challenging when you're using very popular open-source projects, right? Because you're competing for those skills with, literally, everybody, right? A lot of folks want to deploy these things. And then, what people forget sometimes too is, a lot of the leading open-source projects in the cloud native space came out of, you know, big software companies, right? Kubernetes came from Google, Kafka came from LinkedIn, Cassandra from Facebook. And when those companies deploy these systems internally, they have a lot of other supporting infrastructure around it, right? And a lot of that is centered around day-two operations. Right? How do you monitor these things, how do you do log management, how do you do change management, how do you upgrade these things, keep current? So, all of that supporting infrastructure is what an enterprise also needs to develop in order to adopt open-source software, and that's a big part of what we do. >> Right. So, I'd love to get your perspective. So, you said, you were at Airbnb, your founders were at Twitter. You know, often people, I think enterprises, fall into the trap of, you know, we want to be like the hyper-scale guys, you know. We want to be like Google or we want to be like Twitter. But they're not. But I'm sure there's a lot of lessons that you learned in watching the hyper-growth of Airbnb and Twitter. What are some of those that you can bring to help enterprises with? What are some of the things that they should be aware of as, not necessarily maybe their sales don't ramp like those other companies, but their operations in some of these new cloud native things do? >> Right, right.
Yeah, so, it's actually, you know, when we started the company, one of the key drivers was that, you know, we looked at the problems that we solved at Airbnb and Twitter, and we realized that those problems are not specific to those two companies or, you know, Silicon Valley tech companies. We realized that most enterprises in the future will be facing those problems. And a core one is really about agility and innovation. Right? Marc Andreessen, one of our early investors, said, "Software is eating the world." He wrote that many years ago. And so, really what that means is that most enterprises, most companies on the planet, will transform into a software company. With all that entails, right? With the agility that software brings. And, you know, if they don't do that, their competitors will transform into a software company and disrupt them. So, they need to become software companies. And so, a lot of the existing processes that these existing companies have around IT don't work in that kind of environment, right? You just can't have a situation where, you know, a developer wants to deploy a new application that, you know, brings a lot of differentiation for the business, but the first thing they need to do in order to deploy that is file a ticket with IT, and then someone will get to it in three months, right? That is a lot of wasted time, and that's when people surpass you. So, that was one of the key things we saw at Airbnb and Twitter, right? They were also in that old-school IT approach, where it took many months to deploy something. And deploying some of the software we work with got that time down to even minutes, right? So it's empowering developers, right? And giving them the tools to make them agile so they can be innovative and bring the business forward. >> Right.
The other big issue that enterprises have, that you probably didn't have in some of those, you know, kind of native startups, is the complexity and the legacy. >> That's right. >> Right? So you've got all this old stuff that may or may not make any sense to redeploy, you've got stuff (laughing) stuff running in data centers, stuff running on public clouds, everybody wants to get to hybrid cloud to have a single point of view. So, it's a very different challenge when you're in the enterprise. What are you seeing, how are you helping them kind of navigate through that? >> Yeah, yeah. So, one of the first things we did actually, so, you know, most of our products are sort of open-core products. They have a lot of open source at the center, but then, you know, we add enterprise components around that. Typically the first thing that shows up is around security, right? Putting the right access controls in place, making sure the traffic is encrypted. So, that's one of the first things. And then often, the companies we work with are in a regulated environment, right? Banks, healthcare companies. So, we help them meet those requirements as well, and oftentimes that means, you know, adding features around the open-source products to get them to that. >> Right. So, like you said, the world has changed even in the six or seven years you've been at this. The, you know, containers, depending on who you talk to, were around, not quite so hot. Docker's hot, Kubernetes is hot. But one of the big changes that's coming now, looking forward, is IoT and edge. So, you know, you just mentioned security; from the security point of view, you know, now your attack surface has increased dramatically. We've done some work with Forescout, and their secret sauce is they just put a sniffer on your network and find the hundreds and hundreds of devices (laughs)-- >> Yeah. >> That you don't even know are on your network. So do you look forward to kind of the opportunity and the challenges of IoT supported by 5G? What's that do for your business, where do you see opportunities, how are you going to address that? >> Yeah, so, I think IoT is really one of those big mega-trends that's going to transform a lot of things and create all kinds of new business models. And, really, what IoT is for me at the core, it's all around data, right? You have all these devices producing data, whether those are, you know, sensors in a factory on a production line, or, you know, cars on the road that send telemetry data in real time. IoT has been, you know, a big opportunity for us. We work with multiple customers that are in the space. And, you know, one fundamental problem with it is that, with IoT, a lot of the data that organizations need to process is now, all of a sudden, generated at the edge of the network, right? This wasn't the case for enterprises for many years, right? Most of the data was generated, you know, at HQ or in some internal system, not at the edge of the network. And what always happens with large-volume data is, compute generally moves to where the data is, and not the other way around. So, for many of these deployments, it's not efficient to move all that data from those IoT devices to a central cloud location or data-center location. So, those companies need to find ways to process data at the edge. That's a big part of what we're helping them with; it's automating real-time data services and machine-learning services at the edge, where the edge can be, you know, factories all around the world, it could be cruise ships, it could be other types of locations where we're working with customers. And so, essentially what we're doing is we're bringing the automation that people are used to from the public cloud to the edge.
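The "compute moves to where the data is" observation has simple arithmetic behind it. With invented but plausible numbers (a 100 Mbps factory uplink, terabytes of daily sensor data), shipping edge data to a central cloud quickly stops being viable:

```python
def transfer_hours(terabytes, link_mbps):
    """Hours to move `terabytes` of data over a `link_mbps` megabit-per-second link."""
    bits = terabytes * 8 * 10**12        # decimal terabytes to bits
    return bits / (link_mbps * 10**6) / 3600

# One factory producing 5 TB of sensor data per day, over a 100 Mbps uplink:
print(round(transfer_hours(5, 100), 1))  # -> 111.1 hours, over 4 days per day of data
```

The backlog grows faster than it can drain, which is exactly why the processing (and its automation) has to move out to the edge instead.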
So, you know, with the click of a button or a single command, you can install a database or a machine-learning system or a message queue at all those edge locations. And it's not just that stuff is being deployed at the edge; I think the, you know, the standard type of infrastructure mix for most enterprises is a hybrid one. I think most organizations will run a mix of edge, their data centers, and typically multiple public cloud providers. And so, they really need a platform where they can manage applications across all of those environments, and, well, that's a big value that our products bring. >> Yeah. I was at a talk the other day with a senior exec, formerly from Intel, and they thought that it's going to level out at probably 50-50, you know, kind of cloud-based versus on-prem. And that's just going to be the way it is, cause it's just some workloads you just can't move. So, exciting stuff. I can't believe we're coming to the end of 2019, which is amazing to me. As you look forward to 2020 and beyond, what are some of your top priorities? >> Yeah, so, one of my top priorities is really around machine-learning. I think machine-learning is one of these things that, you know, it's really a general-purpose tool. It's like a hammer, you can solve a lot of problems with it. And, you know, besides doing infrastructure and large-scale infrastructure, machine-learning has, you know, always been sort of my second baby. Did a lot of work during grad school and at Airbnb. And so, we're seeing more and more customers adopt machine-learning to do all kinds of interesting, you know, problems, like predictive maintenance in a factory where, you know, every minute of downtime costs a lot of money. But machine-learning is such a new space that a lot of the best practices that we know from software engineering and from running software in production, those same things don't always exist in machine-learning.
And so, what I am looking at is, you know, what can we take from what we learned running production software, what can we take and move over to machine-learning to help people run these models in production, and, you know, where can we deploy machine-learning in our products too, internally, to make them smarter and automate them even more. >> That's interesting, because with machine-learning and AI, you know, there's kind of the tools and stuff, and then there's the application of the tools. And we're seeing a lot of activity around, you know, people using ML in a specific application to drive better performance. As you just said,-- >> Mm-hmm. >> You could do it internally. >> Do you see an open-source play in machine-learning, in AI? Do you see, you know, kind of open-source algorithms? Do you see, you know, a lot of kind of open-source ecosystem develop around some of this stuff? So, just like I don't have time to learn data science, I won't necessarily have to have my own algorithms. How do you see that,-- >> Yeah. >> You know, kind of open-source meets AI and ML, of all things? >> Yeah. It's a space I think about a lot, and what's really great, I think, is that we're seeing a lot of the open-source, you know, best practices that we know from software actually move over to machine-learning. I think it's interesting, right? Deep-learning is all the rage right now, everybody wants to do deep-learning, deep-learning networks. The theory behind deep networks is actually, you know, pretty old. It's from the '70s and '80s. But for a long time, we didn't have enough compute power to really use deep-learning in a meaningful way. We do have that now, but it's still expensive. So, you know, to get cutting-edge results on image recognition or other types of ML problems, you need to spend a lot of money on infrastructure. It's tens of thousands or hundreds of thousands of dollars to train a model. So, it's not accessible to everyone.
But the great news is that, much like in software engineering, we can use these open-source libraries and combine them together and build upon them. We have that same kind of composability in machine-learning, using techniques like transfer-learning. And so, you can actually already see some, you know, open-community hubs spinning up, where people publish models that you can just take, they're pre-trained. You can take them and, you know, just adjust them to your particular use case. >> Right. >> So, I think a lot of that is translating over. >> And even though it's expensive today, it's not going to be expensive tomorrow, right? >> Mm-hmm. >> I mean, if you look at the world through a lens with, you know, the price of compute, storage, and networking asymptotically approaching zero in the not-too-distant future, and think about how you attack problems that way, that's a very different approach. And sure enough, I mean, some might argue that Moore's Law's done, but kind of the relentless march of Moore's Law types of performance increases is not done, it's not necessarily just the doubling of transistors anymore. >> Right. >> So, I think there's huge opportunity to apply these things in a lot of different places. >> Yeah, yeah. Absolutely. >> Can be an exciting future. >> Absolutely! (laughs) >> Tobi, congrats on all your successes! A really fun success story, we continue to like watching the ride, and thanks for spending the few minutes with us. >> Thank you very much! >> All right. He's Tobi, I'm Jeff, you're watching theCUBE, we're at D2iQ headquarters in downtown San Francisco. Thanks for watching, we'll catch you next time! (electric chime)
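Transfer-learning, which Tobi credits with making ML composable, boils down to freezing a pre-trained feature extractor and fitting only a small new head on your own data. A toy pure-Python sketch of that idea (the "pretrained" features and all numbers here are invented for illustration, not a real published model):

```python
def features(x):
    """Stand-in for a frozen, pre-trained feature extractor."""
    return [x, x * x, 1.0]

def train_head(data, lr=0.05, epochs=2000):
    """Gradient descent on the new linear head only; the extractor never changes."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            phi = features(x)
            err = sum(wi * fi for wi, fi in zip(w, phi)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, phi)]
    return w

# New task: y = 2x^2 - x + 0.5, expressible in the frozen features
data = [(x / 2, 2 * (x / 2) ** 2 - (x / 2) + 0.5) for x in range(-4, 5)]
w = train_head(data)
print([round(v, 2) for v in w])   # -> approximately [-1.0, 2.0, 0.5]
```

Only three numbers are learned here instead of a whole network, which is the same economics that make pre-trained model hubs attractive: the expensive training happened once, upstream.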
Prasad Sankaran & Larry Socher, Accenture Technology | Accenture Cloud Innovation Day
>> Hey, welcome back everybody, Jeff Frick here from theCUBE. We're high atop San Francisco in the Accenture Innovation Hub. It's in the middle of the Salesforce Tower. It's a beautiful facility. It had its grand opening about six months ago; we were here for the grand opening. Very cool space. They've got maker studios, they've got all kinds of crazy stuff going on. But we're here today to talk about cloud, in this continuing evolution of cloud in the enterprise, and hybrid cloud and multi-cloud and public cloud and private cloud. And we're really excited to have a couple of guys who are really helping customers make this journey, cause it's really tough to do by yourself. CEOs are super busy, they worry about security and all kinds of other things, so Accenture's often a trusted partner. We've got two of the leaders from Accenture joining us today: Prasad Sankaran, he's the senior managing director of Intelligent Cloud Infrastructure for Accenture, and Larry Socher, the global managing director, Intelligent Cloud Infrastructure offering, from Accenture. Gentlemen, welcome. I love it, intelligent cloud. What is an intelligent cloud all about? Got it in your title. It must mean something pretty significant. >> Yeah, I think first of all, thank you for having us. But yeah, absolutely, everything's around becoming more intelligent, around using more automation, in the work that, you know, we deliver to our clients, and cloud, as you know, is the platform to which all of our clients are moving. So it's all about bringing the intelligence not only into infrastructure, but also into cloud generally. And it's all driven by software. >> Right. It's just funny to think where we are in this journey. We talked a little bit before we turned the cameras on, and there you made an interesting comment when I said, you know, when did this cloud for the enterprise start? And you took it back to SaaS-based applications, which,
>> you know, you were sitting in the Salesforce building. >> That's true. It isn't just the tallest building. >> Everyone's got a lot of focus on AWS's rise, etcetera, but the real start was really getting into SaaS. I mean, I remember we used to do a lot of Siebel deployments for CRM, and we started to pivot to Salesforce; some were moving from Remedy into ServiceNow. We went through on-premise collaboration and email to Office 365. So we've actually been at it for quite a while, particularly in the SaaS world. And it's only more recently that we started to see that kind of push to the public PaaS, and it's starting with cloud native development. But this journey started, you know, seven or eight years ago; that's when we really started to see some scale around it. >> And tell me if you agree: I think what the Salesforces of the world, and the ServiceNows, and Office 365 really did was break down some of those initial barriers, which were all about security, security, security; that was always the top concern. Where now, security is actually probably an attribute that cloud can bring. >> Absolutely. In fact, those barriers took years to bring down. I still saw clients where they were forcing Salesforce or ServiceNow to put, you know, instances on-prem. And I think they finally woke up to the fact that these guys invested a ton in their security organizations. There's a little of that needle-in-the-haystack effect: if you breach a single data set, you know what you're going after, but when you're up in Salesforce, it's a lot harder. And so I think those security concerns have largely gone away. We still have some compliance and regulatory things, data sovereignty. And not that security is solved by any means; it's an ongoing problem. But I think they're getting more comfortable with their data being up in the public domain. Well, not public.
>> And I think it also helped them with their progress towards getting cloud native. So, you know, you picked certain applications which were obviously hosted by Salesforce and other companies, and you did some level of custom development around them. And now I think that's paved the way for more complex applications and different workloads going into, you know, the public cloud and the private cloud. But that's the next part of the journey, right? >> So let's back up half a step, because then, as you said, a bunch of stuff went into public cloud. Everyone's putting it in AWS and Google; IBM has got a public cloud; there were a lot more (not quite so many as there used to be). But then we ran into a whole new host of issues, right? Which is what opened up this hybrid cloud, this multi-cloud world: you just can't put everything into a public cloud. There are certain attributes you need to think about from the application point of view before you decide where to deploy. So I'm just curious, if you can share what you do with clients: how should they think about applications? How should they think about what to deploy where? >> I'll start, and Larry has a lot of expertise in this area. I think, you know, we have to obviously start from an application-centric perspective. You have to take a look at where your applications have to live, what are some of the data implications on the applications, what do you have by way of regulatory and compliance issues, and what do you have to do as far as performance, because certain applications have to be in a high-performance environment and certain other applications don't. A lot of these factors will then drive where these applications need to reside. And what we see in today's world is really a complex situation, where you have a lot of legacy, but you also have private as well as public cloud.
So you approach it from an application perspective. >> Yeah. I mean, if you look at Accenture's clients, we're totally focused on the upper end of the market, the Global 2000. Our clients typically have application portfolios ranging from 500 to 20,000 applications. And really, if you think about the purpose of cloud, or even infrastructure for that matter, they're there to serve the applications. No one cares how your cloud or infrastructure is performing if the application isn't. So we start off with an application modernization approach: with our tech advisory guys coming in, our intelligent engineering services to do the cloud native and app-mod work, and our platforms guys, who do everything from Salesforce through SAP, they drive a strategy on how those applications are going to evolve. With those 500 to 20,000 applications, we determine, usually using something like the six R's methodology: am I going to retire this? Am I going to retain it? Am I going to replace it with SaaS? Am I going to refactor or re-platform it? And it's ultimately that strategy that's really going to dictate a multi and hybrid cloud story. So it's based on the applications: data gravity issues, where they're going to reside, their requirements around regulatory, their requirements for performance, etcetera. That will then dictate the cloud strategy. I'm, you know, not a big fan of going in there and just doing a multi and hybrid cloud strategy without a really good up-front application portfolio approach: how are we going to modernize that? >> And how do you segment that? That's a lot of applications. How do you help them prioritize where they should be focusing?
>> So typically what we do is work with our clients to do a full application portfolio analysis, and then we're able to segment the applications based on, you know, importance to the business and some of the factors that both of us mentioned. And once we have that, we come up with an approach where certain sets of applications you move to SaaS, certain other applications you move to PaaS (so you're basically doing the refactoring and the modernization), and then certain others you can just lift and shift. So it's really a combination of both modernization and migration. But to do that, you have to initially look at the entire set of applications and come up with that approach. >> I'm just curious, within that application assessment, where is cost savings, where is "this is just old," and where are the opportunities to innovate faster? Because a lot of the talk these days is cost savings, but the real advantage is execution speed, if you can get it. >> If you go back three or four years, there were a lot of CIO discussions around cost savings, and we've really seen our clients shift. Cost never goes away, obviously, but there's a lot greater emphasis now on business agility: how to innovate faster, getting your capabilities to market faster, changing the customer experience. So IT is really trying to step up and, you know, enable the business to compete in the marketplace. We're seeing a huge shift in emphasis, or focus at least, starting with: how do I get better business agility, how do I leverage cloud and cloud native development, and how do I get better service levels? Actually, we've started seeing increased emphasis on: hey, these applications need to work.
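The portfolio segmentation the two describe, scoring each application and assigning it one of the six R's dispositions, can be sketched in miniature. The scoring inputs, thresholds, and application names below are invented for illustration; they are not Accenture's actual assessment criteria:

```python
from dataclasses import dataclass

# Hypothetical scoring inputs; real assessments weigh many more factors
# (compliance, data gravity, performance needs, business criticality).
@dataclass
class App:
    name: str
    business_value: int      # 1 (low) .. 5 (high)
    tech_health: int         # 1 (end-of-life) .. 5 (modern)
    saas_alternative: bool   # a viable SaaS replacement exists

def six_rs(app: App) -> str:
    """Assign one of the 'six R's' dispositions to an application."""
    if app.business_value <= 1:
        return "retire"        # little value: decommission it
    if app.saas_alternative:
        return "replace"       # repurchase as SaaS
    if app.tech_health >= 4:
        return "rehost"        # healthy code: lift and shift
    if app.business_value >= 4:
        return "refactor"      # high value, aging tech: modernize
    if app.tech_health >= 2:
        return "replatform"    # modest changes, e.g. containerize
    return "retain"            # leave in place for now

portfolio = [
    App("claims-mainframe", business_value=5, tech_health=1, saas_alternative=False),
    App("hr-suite", business_value=3, tech_health=2, saas_alternative=True),
    App("batch-reports", business_value=1, tech_health=2, saas_alternative=False),
]
for a in portfolio:
    print(a.name, "->", six_rs(a))   # refactor, replace, retire
```

In a real engagement the output of this triage, not any single cloud preference, is what drives the multi and hybrid cloud strategy the speakers describe.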
So obviously, cost still remains a factor, but we see much more emphasis on agility, on enabling the business, on giving the right service levels and the right experience to the users and customers. A big pivot there. >> Okay. And let's get the definitions out, because there's a lot of conversation about public cloud and private cloud, but also hybrid cloud and multi-cloud, and confusion about what those are. How do you guys define them? How do you help your customers think about the definitions? >> Yes, I think it's a really good point. There were a lot of different definitions out there, but as I talk to more clients and our partners, I think we're all starting to come to the same kind of definition. Multi-cloud is really about using more than one cloud. But hybrid, I think, is a very important concept, because hybrid is really all about the placement of the workload, where your application is going to run. And again, it goes to all of these points we talked about: data gravity, performance, and other factors. It's really all about where you place the specific workload. >> If you look at public, it obviously gives us the innovation of the public providers; look at how fast Amazon comes out with new versions of Lambda, etcetera. So there's the innovation, and obviously agility: you can spin up environments very quickly, which is one of the big benefits, plus the consumption economic models. So there are a number of drivers pushing in the direction of public. On the private side, there are still quite a few benefits that don't get talked about as much.
Performance, for one. In the public world, although they're scaling up larger T-shirt sizes, etcetera, they're still trying to do that for a large array of applications. On the private side, you can really tailor something to very high performance characteristics, whether it's a 30 to 64 terabyte HANA; you can get a much more focused, precision environment for business-critical workloads like that: Oracle, Oracle RAC, the Hadoop clusters, think about fraud analysis. So that's a big part of it. Related to that is the data gravity that Prasad just mentioned. If I've got a 64 terabyte HANA database sitting in my private cloud, it may not be that convenient to get that data shared up in Redshift or in Google's TensorFlow. So there are data gravity issues; the networks just aren't there, and the latency of moving that stuff around is a big issue. And then a lot of people have investments in their data centers. The other piece that's interesting is legacy. As we start to look at the world, there's a ton of code still living in, you know, whether it's Unix systems or IBM mainframes. There's a lot of business value there, and sometimes the business cases aren't necessarily there to replace them, right? And in the world of digital decoupling, where I can start to use microservices, we're seeing a lot of trends. We worked with one hotel to take their reservation system and wrap it in microservices; we then did an OpenShift, Couchbase front end. And now, when you go and browse properties and look at rates, you're actually going into a distributed database cache, using the latest cloud native technologies, which can be dropped every two weeks, or every three or four days, for my mobile application.
And it's only when the transaction goes back to reserve the room that it goes back to the legacy system. So we're seeing a lot of power with digital decoupling, but we still need to take advantage of these legacy applications. So with the data centers, we're really trying to evolve them: how do we learn everything from the world of public and start to bring those same kinds of efficiencies to the world of private? And what we're seeing is this emerging approach where I can take advantage of the innovation cycles, the Lambdas, the Redshifts, the functions of the public world, but maybe keep some of my more business-critical, regulated workloads on the private side: if I've got GxP compliance, if I've got HIPAA data to worry about, GDPR, there's a whole set of regulatory requirements. Now, over time we do anticipate the public guys will get much better and more compliant; in fact, they've made great headway already. But from my clients' perspective, a number of them are still, you know, not 100% comfortable. >> You've got to meet Teresa Carlson. She'll change their minds; she runs AWS public sector and is doing amazing things, obviously, with big government contracts. But you raise a really interesting point. You almost described what I would call a hybrid application in this hotel example, because it's, you know, kind of breaking up the application and leveraging microservices to do things around the core, which lets you take advantage of some of this agility and hyper-fast development, yet still maintain that core stuff that either doesn't need to move, works fine, or would be too expensive to refactor. It's a real different way to think about workloads and applications: breaking those things into bits. >> And we see that pattern all over the place.
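The digital-decoupling pattern described above (a cloud native, cached front end wrapped around a slow legacy system of record, with only the booking transaction going back) can be sketched as a toy example. All class names and behavior here are illustrative stand-ins, not the hotel's actual implementation:

```python
import time

class LegacySystem:
    """Stand-in for a mainframe reservation system: slow but authoritative."""
    def __init__(self):
        self.rates = {"room-101": 180, "room-102": 220}

    def get_rate(self, room):
        time.sleep(0.05)                  # simulate a slow legacy call
        return self.rates[room]

    def reserve(self, room):
        return f"confirmed:{room}"

class CachedFrontEnd:
    """Microservice front end with a read-through cache (Couchbase-style)."""
    def __init__(self, legacy):
        self.legacy = legacy
        self.cache = {}

    def browse_rate(self, room):
        if room not in self.cache:        # cache miss: one legacy read
            self.cache[room] = self.legacy.get_rate(room)
        return self.cache[room]           # later reads never touch legacy

    def book(self, room):
        return self.legacy.reserve(room)  # writes go to the system of record

frontend = CachedFrontEnd(LegacySystem())
print(frontend.browse_rate("room-101"))  # first read populates the cache
print(frontend.browse_rate("room-101"))  # served from the cache
print(frontend.book("room-101"))
```

The front-end layer can then be redeployed every few days or weeks on its own cadence, exactly the agility point made in the conversation, while the legacy core stays untouched.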
I gave you the hotel example there, but look at financial services: retail banking, open banking; a lot of those retail applications are on the mainframe. Or insurance claims. And if you look at the business value of replicating a lot of the regulatory stuff, the locality stuff, it doesn't make sense to rewrite it; there's no real inherent business value. But if I can wrap it and expose it in a microservices architecture, and do a cloud native front end that's going to give me a 360-degree view of a customer and change the customer experience, I can still get that agility and the innovation cycles of public by wrapping my legacy environment. >> Prasad, jump in, and I'll give you something to react to, which is the single pane of glass. How do I manage all this stuff now? Not only do I have distributed infrastructure, now I've got distributed applications like the one Larry just described, and everyone wants to be that single pane of glass; everybody wants to be the app that's up on everybody's screen. How are you seeing people deal with the management complexity of these kinds of distributed infrastructures, if you will? >> Yeah, I think that's an area that's actually very topical these days, because you're starting to see more and more workloads go to private cloud, so you've got a hybrid infrastructure. You're starting to see movement from just using VMs to containers and Kubernetes, and, you know, we talked about serverless, and so on. So all of our clients are looking for a way, and you have different types of users as well: you have developers, you have data scientists, you have operators, and so on. They're all looking for that control plane that allows them access and a view to everything that is out there being used in the enterprise.
And that's where I think a company like Accenture is able to use best-of-breed tools to provide that visibility to our clients. >> Right. >> Yeah, I mean, you hit the nail on the head. With all the promise of cloud and all the power of these new architectures, it's becoming much more dynamic and ephemeral, with containers and Kubernetes, with serverless computing. That one application for the hotel: they've actually now got some of their containers running natively on AWS, and they're looking at serverless. So even a single application can span all of that. And one of the things we've seen is, first, that a lot of our clients used to look at application management as separate from their infrastructure, and the lines are now getting very blurry. You need very tight alignment. Take that single application: if my public side goes down, or my mid-tier with OpenShift on VMware goes down, or my back-end mainframe goes down, or the networks that connect it go down, or the devices that talk to it; despite the power, it's a very complex environment. So what we've been doing is, first, looking at how we get better synergy across our Application Services teams, who do application management and optimization, and our cloud infrastructure teams. How do we get better alignment with embedded security, with our managed security services, bringing those together? And then we got very aggressive with a cloud-first strategy: how do we manage the world of public? Looking at the public providers, the hyperscalers, and the incredible degrees of automation they hit, we really looked at that and said: hey, look, you've got to operate differently in this new world.
What can we learn from how the public guys are doing it? We came up with this concept we call "run different": how do you operate differently in this new multi-speed, very hybrid world across public, private, and legacy environments? We started to look and say, okay, what is it that they do? First, they standardize, and that's one of the big challenges; going into almost all of our clients, there is sprawl, whether it's application sprawl or infrastructure sprawl. >> And "my business is so unique, Larry; no business out there has the same processes that we do." >> Right. So we started with: how do we standardize, like the Accenture hybrid cloud solution we partnered on with HPE and VMware; that was an example. Because you can't automate unless you standardize. So that was the first thing: standardizing our service catalog. The next thing is the operating model; they obviously operate differently. So we've been putting a lot of time and energy into what I call a cloud and agile operating model. And a big part of that, you hear a lot about DevOps right now, but it's truly putting security and operations into DevSecOps, bringing the development and the operations much more tightly together. So we're spending a lot of time looking at that and transforming operations, re-skilling the people. The operators of the future aren't eyes on glass; they're developers. They're writing the data ingestion and the analytic algorithms to do predictive operations. They're writing the automation scripts to take work out. And over time they'll be tuning the AI engines to really optimize the environment. And then finally, as Prasad alluded to, there are the platforms, the control planes, that do that.
So what we've been doing is making significant investments in the Accenture Cloud Platform, our infrastructure automation platforms, and then the application teams with the myWizard framework, and we've started to bring that together into an integrated control plane that can plug into our clients' environments to manage seamlessly, and provide automation, analytics, and AI across apps, cloud, infrastructure, and even security. And that really is AIOps, right? That's delivering on, as the industry starts to define and really coalesce around AIOps, that's what we do. >> So just so I'm clear: it's really your layer, a software layer, a kind of management layer, that integrates all these different systems and provides a kind of unified view, control, AI, reporting, et cetera. Right? >> Exactly. And it can plug in and integrate third-party tools for specific functions. >> I'm just curious: one of the themes we hear out in the press right now is this kind of pullback from public cloud, apps coming back. Or maybe it was, you know, a bit of a rush, maybe a little too aggressive. What are some of the reasons why people are pulling stuff back out of public clouds? Was it just the wrong application? The costs were not what they anticipated? What are some of the reasons you see apps coming back in-house? >> Yeah, I think it's a variety of factors. It's certainly cost, I think, for one. And there are multiple private options now; we don't talk about this much, but the hyperscalers themselves are coming out with their own private options, like Anthos and Outposts and Azure Stack and so on, and Alibaba has its own offering. So you see a proliferation of that, and you see many more options around private cloud.
So I think cost is certainly a factor. The second is data gravity, which I think is a very important point, because as you start to see how different applications have to work together, that becomes very important. The third is just compliance and, you know, the regulatory environment. As we look across the globe, even outside the U.S., at Europe and other parts of Asia, as clients move more to the cloud, that becomes an important factor. So as you start to balance these things, I think you have to take a very application-centric view. You see some of those apps moving back, and I think that's the point of the hybrid world: you can have an app running on the private cloud, and then tomorrow you can move it, since it's been containerized, to run on public, and it's all managed. >> Yeah, I mean, cost is a big factor if you actually look at it. Most of our clients were typically big capex businesses, and all of a sudden they're using this consumption model, and they really didn't have a function to go and look at the thousands or millions of lines on the, you know, Azure statement. >> Exactly. >> I think they misjudged some of the scale, and that's one of the reasons we say it's got to be an application-led modernization; that will really dictate it. And in many cases, people may not have thought through which applications, what data. Data gravity is a conversation I'm having with just about every client right now. If I've got a 64 terabyte HANA, and that's the core, my crown jewels, that data, how do I get that to TensorFlow? How do I get that... >> Right. But if Andy was here, though, he would say: we'll send down the, which version is it, the snowplows? Snowball? Snowball.
Well, whatever they're called; Snowballs. But I have seen the whole truck trailer that comes out, and he'd say: take that and stick it in the cloud. Because if you've got that data in a single source, you can apply a multitude of applications across that thing. So they're pushing: get that data into a single source. Of course, then to move it or change it, you run into all these micro line items on the billing statement. >> Take the hotel: their data's still on the mainframe, so when they need to expose it, they have a database cache and they move it out. But particularly as data sets get larger, the data gravity becomes a big issue, because no matter what, while Moore's Law might have elongated from 18 to 24 months, the network will always be the bottleneck. So ultimately, as we proliferate more and more data, and data sets get bigger, the network becomes more of a bottleneck. And a lot of times you've got to look at your applications: hey, I've got some legacy database I need to get to; I need this to be proximate, somewhere where I don't have high-bandwidth or high-latency problems. Also, egress costs are a pretty big deal: my data is up in the cloud, and I'm going to get charged for pulling it off. That's been a big issue. >> You know, it's funny; I think a lot of the issue, obviously, is the complexity of the billing. It's a totally different billing model. But I also think a lot of people will put stuff in a public cloud and then operate it as if they bought it and are running it in their data center, instead of this turn-it-on, turn-it-off model. Everyone loves to talk about the example of turning it on when you need it, but nobody ever talks about turning it off when you don't. But to kind of close out our conversation,
I want to talk about AI and applied AI, because there's a lot of talk in the marketplace about AI and machine learning. But as you guys probably know better than anybody, it's the application of AI to specific problems that really unlocks the value. And as we're sitting here talking about this complexity, I can't help but think that applied AI in a management layer, like your "run different" approach, set up to actually know when to turn things on, when to turn things off, what to move and what not to move: it's going to have to be machines running that, right? Because the data sets and the complexity of these systems are going to be just overwhelming. >> Yeah, absolutely. I completely agree with you. In fact, at Accenture we actually refer to this whole area as applied intelligence; that's our AI, right? And it is absolutely about adding more and more automation, moving everything more to where it's being run by the machine rather than having people really working on these things. >> Yeah, I mean, you hit the nail on the head. Given how things are getting more complex and ephemeral, you think about Kubernetes, etcetera, we're going to have to leverage AI; humans are not going to be able to manage these environments, right? What's interesting is that we've used AI quite effectively for quite some time, but it's good at some stuff and not good at others. So we find it's very good at, like, ticket triage, ticket routing, et cetera. Any time we take over an account, we tune our AI engines; we have ticket advisors, etcetera. That's probably where we've gotten the most bang for the buck. We tried it in the network space with less success to start, even with commercial products that were out there. I think where AI ultimately bails us out of this is, if you look at the problem: a lot of times we talk about optimizing around cost, but then there's performance.
I mean, they're somewhat at odds; you've got to weigh them off against each other. So you've got a very multi-dimensional problem of how to optimize my workloads, particularly if I've got a Kubernetes cluster, something on Amazon, something running on my private cloud, etcetera. We're going to get some very complex environments, and the only way you're going to be able to optimize across multiple dimensions (cost, performance, service levels, and then multiple options of where to run it, public or private, what's my network cost, etcetera) is an AI engine, and tuning those AI engines. So ultimately, you heard me earlier on the operators: they write the analytic algorithms, they do the automation scripts, but they're ultimately the ones who then tune the AI engines that will manage our environment. And I think Kubernetes will be interesting, because it becomes the link to the control plane to optimize workload placement between environments. >> And then you have dynamic optimization: you might be optimizing for cost right now, but you might be optimizing for output the next day. So it's really a kind of never-ending process. >> Right, and multi-dimensional optimization is very difficult. Humans can't get their heads around it; machines can, but they need to be trained. >> Well, Prasad, Larry, lots of great opportunities for Accenture to bring that expertise to the table. So thanks for taking a few minutes to walk through some of these things. >> Our pleasure. Thank you. >> He's Prasad, he's Larry, I'm Jeff. You're watching theCUBE. We are high above San Francisco in the Salesforce Tower, at the Accenture Innovation Hub. Thanks for watching. We'll see you next time.
Prasad Sankaran & Larry Socher, Accenture Technology | Accenture Innovation Day
>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're high atop San Francisco in the Accenture Innovation Hub, in the middle of the Salesforce Tower. It's a beautiful facility; I think you had the grand opening about six months ago, and we were here for it. Very cool space: they've got maker studios, they've got all kinds of crazy stuff going on. But we're here today to talk about cloud, and this continuing evolution of cloud in the enterprise: hybrid cloud and multi-cloud, public cloud and private cloud. And we're really excited to have a couple of guys who are really helping customers make this journey, because it's really tough to do by yourself. CIOs are super busy, they worry about security and all kinds of other things, so Accenture is often a trusted partner. We've got two of the leaders from Accenture joining us today: Prasad Sankaran, senior managing director of Intelligent Cloud Infrastructure for Accenture, welcome; and Larry Socher, global managing director of the Intelligent Cloud Infrastructure offering from Accenture. Gentlemen, welcome. I love it: intelligent cloud. What is an intelligent cloud all about? You've got it in your title, so it must mean something pretty significant.
>> Yeah, first of all, thank you for having us. But absolutely, everything is around becoming more intelligent, around using more automation in the work that we deliver to our clients. And cloud, as you know, is the platform everybody is moving to; all of our clients are moving there. So it's all about bringing intelligence not only into infrastructure but also into cloud generally, and it's all driven by software.
>> Right. It's just funny to think where we are in this journey. We talked a little bit before we turned the cameras on, and you made an interesting comment when I asked, when did this cloud-for-the-enterprise story start? And you took it back to SaaS-based applications, which, you know, we're sitting in the Salesforce building.
>> That's true. It isn't just the tallest building in San Francisco.
>> Everyone's got a lot of focus on AWS's rise, etcetera, but the real start was really getting into SaaS. I remember we used to do a lot of Siebel deployments for CRM, and we started to pivot to Salesforce; some were moving from Remedy into ServiceNow. We went through on-premise collaboration and email to Office 365. So we've actually been at it for quite a while, particularly in the SaaS world, and it's only more recently that we've started to see that kind of push to the public PaaS and into cloud-native development. But this journey started, you know, seven or eight years ago; that's when we really started to see some scale around it.
>> And tell me if you agree: I think what the Salesforces and the ServiceNows of the world and Office 365 did was break down some of those initial barriers, which were really all about security, security, security. That's all you used to hear, where now security is actually probably an attribute the cloud can bring.
>> Absolutely. In fact, those barriers took years to bring down. I still saw clients forcing Salesforce or ServiceNow to put instances on-prem, and I think they finally woke up: these guys invest a ton in their security organizations. And there's a bit of a needle-in-the-haystack effect: if you breach one data set, you know what you're going after, but when you're up in Salesforce it's a lot harder. So I think the security objections have largely gone away. We still have some compliance and regulatory things, data sovereignty; and not that security is solved by any means, it's always an ongoing problem. But I think they're getting more comfortable with their data being up in, well, not quite the public domain, but the public cloud.
>> And I think it also helped them with their progress towards getting cloud native. You picked certain applications which were obviously hosted by Salesforce and other companies, and you did some level of custom development around them. And now I think that's paved the way for more complex applications and different workloads going into the public cloud and the private cloud. But that's the next part of the journey.
>> Right. So let's back up half a step, because then, as you said, a bunch of stuff went into public cloud. Everyone's putting it in AWS and Google; IBM has got a public cloud; there were a lot more, though not quite so many as there used to be. But then we ran into a whole new host of issues, which kind of opened up this hybrid cloud, this multi-cloud world, where you just can't put everything into a public cloud. There are certain attributes that you need to think about from the application point of view before you decide where to deploy it. So I'm just curious, if you can share, what do you guys do with clients? How should they think about applications, and how should they think about what to deploy where?
>> I'll start, and Larry has a lot of expertise in this area. I think we obviously have to start from an application-centric perspective. You've got to take a look at where your applications have to live, what the data implications on the applications are, what you have by way of regulatory and compliance issues, and what you have to do as far as performance, because certain applications have to be in a high-performance environment and certain other applications don't. A lot of these factors will then drive where these applications need to reside. And what we see in today's world is a really complex situation, where you have a lot of legacy, but you also have private as well as public cloud. So you approach it from an application perspective.
>> Yeah. I mean, if you look at Accenture's clients, we're totally focused on the upper end of the market, the Global 2000, and our clients typically have application portfolios ranging from 500 to 20,000 applications. And really, the purpose of cloud, or even infrastructure for that matter, is to serve the applications; no one cares about the cloud or the infrastructure if the apps aren't performing. So we start off with an application modernization approach. Ultimately, with our tech advisory guys coming in, our intelligent engineering services that do the cloud-native and app-mod work, and our platforms guys who do everything from Salesforce through SAP, we drive a strategy on how those applications are going to evolve. With those 500 to 20,000 applications you determine, usually using something like the six R's methodology: am I going to retire this, am I going to retain it, am I going to replace it with SaaS, am I going to rehost it, refactor it, or replatform it? And it's ultimately that strategy, based on the applications, the data gravity issues, where they're going to reside, and their requirements around regulatory, performance, etcetera, that will then dictate the multi- and hybrid-cloud strategy. I'm not a big fan of going in and just doing a multi- or hybrid-cloud strategy without a really good up-front application portfolio approach: how are we going to modernize that?
>> Got it. And how do you segment that? That's a lot of applications. How do you help them prioritize where they should be focusing first?
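The six R's triage Larry describes (retire, retain, replace, rehost, refactor, replatform) can be sketched as a toy rule set. This is a hedged illustration only, not Accenture's actual tooling; the application attributes and the ordering of the rules are hypothetical stand-ins for the kinds of factors discussed here (business importance, an available SaaS replacement, regulatory constraints, cloud readiness).

```python
# Hypothetical sketch of a six R's portfolio segmentation.
# Attributes and decision rules are illustrative only.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: int      # 1 (low) to 5 (high)
    still_used: bool
    saas_equivalent: bool    # a SaaS product could replace it
    regulated_data: bool     # e.g. GxP / HIPAA / GDPR constraints
    cloud_ready: bool        # clean architecture, easy to rehost

def disposition(app: App) -> str:
    """Assign one of the six R's to an application."""
    if not app.still_used:
        return "retire"
    if app.saas_equivalent:
        return "replace (SaaS)"
    if app.regulated_data:
        return "retain (private cloud / on-prem)"
    if app.cloud_ready:
        return "rehost (lift and shift)"
    if app.business_value >= 4:
        return "refactor (cloud native)"
    return "replatform"

portfolio = [
    App("crm", 4, True, True, False, False),
    App("claims-mainframe", 5, True, False, True, False),
    App("reporting", 2, False, False, False, True),
    App("booking-engine", 5, True, False, False, False),
]

for app in portfolio:
    print(f"{app.name}: {disposition(app)}")
```

The rule ordering encodes policy: an unused app is retired before anything else is considered, and regulated data pins an application to private infrastructure regardless of its other attributes.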
>> So typically what we do is work with our clients to do a full application portfolio analysis, and then we're able to segment the applications based on importance to the business and some of the factors that both of us mentioned. Once we have that, we come up with an approach where certain sets of applications get moved to SaaS, certain other applications you move to PaaS, so you're basically doing the refactoring and the modernization, and then certain others you can just lift and shift. So it's really a combination of both modernization and migration. But to do that, you have to initially look at the entire set of applications and come up with that approach.
>> I'm just curious, within that application assessment, where is cost savings, where is 'this is just old,' and where are the opportunities to innovate faster? Because a lot of the talk is really about cost savings, but the real advantage is execution speed, if you can get it.
>> If you go back three or four years, there were a lot of CIO discussions around cost savings, but we've really seen our clients shift. Cost never goes away, obviously, but there's a lot greater emphasis now on business agility: how to innovate faster, getting your capabilities to market faster to change the customer experience. So IT is really trying to step up and enable the business to compete in the marketplace. We're seeing a huge shift in emphasis, or focus at least, starting with how do I get better business agility out of leveraging cloud and cloud-native development, and then service levels; actually, we've started seeing increased emphasis there too, because these applications need to work. So obviously cost still remains a factor, but we see much more emphasis on agility, on enabling the business, and on giving the right service levels and the right experience to the users and customers. Big pivot there.
>> Okay. And let's get the definitions out, because there's a lot of conversation about public cloud and private cloud, but also hybrid cloud and multi-cloud, and confusion about what those are. How do you guys define them? How do you help your customers think about the definitions?
>> Yes, I think it's a really good point. There are a lot of different definitions out there, but as I talk to more clients and our partners, I think we're all starting to come to the same kind of definition. Multi-cloud is really about using more than one cloud. But hybrid, I think, is a very important concept, because hybrid is really all about the placement of the workload, where your application is going to run. And again, it goes to all of the points we talked about: data gravity, performance, and other factors. It's really all about where you place the specific workload.
>> If you look at it, public obviously gives us the innovation of the public providers; look at how fast Amazon comes out with new versions of Lambda, etcetera. So there's the innovation, there's agility, you can spin up environments very quickly, which is one of the big benefits, and there are the consumption economic models. So there are a number of drivers pushing in the direction of public. On the private side, there are still quite a few benefits that don't get talked about as much. One is performance: in the public world, although they're scaling up larger T-shirt sizes, etcetera, they're still doing that for a large array of applications, while on the private side you can really tailor something to very high performance characteristics. Whether it's a 30 to 64 terabyte HANA, you can get a much more focused, precise environment for business-critical workloads like that: Oracle, Oracle RAC, the Hadoop clusters doing fraud analysis. Related to that is the data gravity that Prasad just mentioned: if I've got a 64 terabyte HANA database sitting in my private cloud, it may not be that convenient to get that data up into Redshift or into Google's TensorFlow. The networks just aren't there; the latency of moving that stuff around is a big issue. And then a lot of people have investments in their data centers. The other piece that's interesting is legacy. As we look at the world, there's a ton of code still living on, you know, Unix systems and IBM mainframes. There's a lot of business value there, and sometimes the business cases aren't necessarily there to replace them. And in the world of digital decoupling, where I can start to use microservices, we're seeing a lot of trends. We worked with one hotel to take their reservation system, wrap it in microservices, and then did an OpenShift, Couchbase front end. Now, when you go in browsing properties and looking at rates, you're actually going into a distributed database cache, using the latest cloud-native technologies that can be dropped every two weeks, or every three or four days, for the mobile application. And it's only when the transaction goes back to reserve the room that it goes back to the legacy system. So we're seeing a lot of power with digital decoupling, but we still need to take advantage of these legacy applications. So with the data centers, we're really trying to evolve them: how do we take everything we've learned from the world of public and bring similar efficiencies to the world of private? What we're seeing is this emerging approach where I can take advantage of the innovation cycles, the Lambdas, the Redshifts, the functions of the public world, but then maybe keep some of my more business-critical, regulated workloads on the private side: I've got GxP compliance, I've got HIPAA data I need to worry about, GDPR, a whole set of regulatory requirements. Now, over time we do anticipate the public guys will get much better on compliance; in fact, they've made great headway already. But a number of clients are still, you know, not 100% comfortable.
>> You've got to meet Teresa Carlson; she'll change your mind. She runs AWS public sector and is doing amazing things, obviously, with big government contracts. But you raise a real interesting point. You almost described what I would call a hybrid application in this hotel example, because it's kind of breaking the application apart and leveraging microservices to do things around the core. That lets you take advantage of this agility and hyper-fast development, yet still maintain the core stuff that either doesn't need to move, works fine, or would be too expensive to refactor. It's a real different way to think about workloads and applications: breaking those things into bits.
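The hotel pattern just summarized, a cloud-native front end serving reads from a distributed cache while writes still go to the legacy system of record, can be sketched minimally. This is a hypothetical illustration; the class names and the in-memory "mainframe" and "cache" stand in for whatever reservation API and distributed store (Couchbase, in Larry's example) are actually used.

```python
# Illustrative sketch of the "wrap the legacy core" pattern:
# reads are served from a fast cache; only writes (reservations)
# go back to the legacy system of record. Names are hypothetical.

class LegacyReservationSystem:
    """Stand-in for the mainframe system of record."""
    def __init__(self):
        self.rates = {"room-101": 149.0}
        self.reservations = []

    def fetch_rate(self, room: str) -> float:
        return self.rates[room]        # imagine a slow mainframe call

    def reserve(self, room: str, guest: str) -> None:
        self.reservations.append((room, guest))

class ReservationFacade:
    """Microservice wrapper: cache-backed reads, pass-through writes."""
    def __init__(self, legacy: LegacyReservationSystem):
        self.legacy = legacy
        self.cache: dict[str, float] = {}   # stand-in for Couchbase

    def browse_rate(self, room: str) -> float:
        # Read path: hit the distributed cache, fall back to legacy.
        if room not in self.cache:
            self.cache[room] = self.legacy.fetch_rate(room)
        return self.cache[room]

    def book(self, room: str, guest: str) -> None:
        # Write path: the transaction still goes to the mainframe.
        self.legacy.reserve(room, guest)

legacy = LegacyReservationSystem()
svc = ReservationFacade(legacy)
print(svc.browse_rate("room-101"))   # first read fills the cache
svc.book("room-101", "Ada")
print(legacy.reservations)
```

The design choice is that the high-frequency path (browsing rates) never touches the mainframe after warm-up, so the front end can iterate on two-week release cycles while the system of record stays untouched.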
I'm gonna give you the hotel Example Where? But finance, you know, look at financial service. Is retail banking so open banking a lot. All those rito applications are on the mainframe. I'm insurance claims and and you look at it the business value of replicating a lot of like the regulatory stuff, the locality stuff. It doesn't make sense to write it. There's no rule inherent business values of I can wrap it, expose it and in a micro service's architecture now D'oh cloud native front end. That's gonna give me a 360 view a customer, Change the customer experience. You know, I've got a much you know, I can still get that agility. The innovation cycles by public. Bye bye. Wrapping my legacy environment >> and percent you raided, jump in and I'll give you something to react to, Which is which is the single planet glass right now? How do I How did I manage all this stuff now? Not only do I have distributed infrastructure now, I've got distributed applications in the and the thing that you just described and everyone wants to be that single pane of glass. Everybody wants to be the app that's upon everybody. Screen. How are you seeing people deal with the management complexity of these kind of distributed infrastructures? If you >> will Yeah, I think that that's that's an area that's, ah, actually very topical these days because, you know, you're starting to see more and more workers go to private cloud. And so you've got a hybrid infrastructure you're starting to see move movement from just using the EMS to, you know, cantinas and Cuba needs. And, you know, we talked about Serval s and so on. So all of our clients are looking for a way, and you have different types of users as well. Yeah, developers. You have data scientists. You have, you know, operators and so on. So they're all looking for that control plane that allows them access and a view toe everything that is out there that is being used in the enterprise. 
And that's where I think you know, a company like Accenture were able to use the best of breed toe provide that visibility to our clients, >> right? Yeah. I mean, you hit the nail on the head. It's becoming, you know, with all the promises, cloud and all the power. And these new architectures is becoming much more dynamic, ephemeral, with containers and kubernetes with service computing that that that one application for the hotel, they're actually started in. They've got some, actually, now running a native us of their containers and looking at surveillance. So you're gonna even a single application can span that. And one of things we've seen is is first, you know, a lot of our clients used to look at, you know, application management, you know, different from their their infrastructure. And the lines are now getting very blurry. You need to have very tight alignment. You take that single application, if any my public side goes down or my mid tier with my you know, you know, open shipped on VM, where it goes down on my back and mainframe goes down. Or the networks that connected to go down the devices that talk to it. It's a very well. Despite the power, it's a very complex environment. So what we've been doing is first we've been looking at, you know, how do we get better synergy across what we you know, Application Service's teams that do that Application manager, an optimization cloud infrastructure. How do we get better alignment that are embedded security, You know, how do you know what are managed to security service is bringing those together. And then what we did was we looked at, you know, we got very aggressive with cloud for a strategy and, you know, how do we manage the world of public? But when looking at the public providers of hyper scale, er's and how they hit Incredible degrees of automation. We really looked at, said and said, Hey, look, you gotta operate differently in this new world. 
What can we learn from how the public guys we're doing that We came up with this concept. We call it running different. You know, how do you operate differently in this new multi speed? You know, you know, hot, very hybrid world across public, private demon, legacy, environment, and start a look and say, OK, what is it that they do? You know, first they standardize, and that's one of the big challenges you know, going to almost all of our clients in this a sprawl. And you know, whether it's application sprawl, its infrastructure, sprawl >> and my business is so unique. The Larry no business out there has the same process that way. So >> we started make you know how to be standardized like center hybrid cloud solution important with hp envy And where we how do we that was an example of so we can get to you because you can't automate unless you standardise. So that was the first thing you know, standardizing our service catalog. Standardizing that, um you know, the next thing is the operating model. They obviously operate differently. So we've been putting a lot of time and energy and what I call a cloud and agile operating model. And also a big part of that is truly you hear a lot about Dev ops right now. But truly putting the security and and operations into Deb said cops are bringing, you know, the development in the operations much tied together. So spending a lot of time looking at that and transforming operations re Skilling the people you know, the operators of the future aren't eyes on glass there. Developers, they're writing the data ingestion, the analytic algorithms, you know, to do predictive operations. They're riding the automation script to take work, you know, test work out right. And over time they'll be tuning the aye aye engines to really optimize environment. And then finally, has Prasad alluded to Is that the platforms that control planes? That doing that? 
So, you know what we've been doing is we've had a significant investments in the eccentric cloud platform, our infrastructure automation platforms, and then the application teams with it with my wizard framework, and we started to bring that together you know, it's an integrated control plane that can plug into our clients environments to really manage seamlessly, you know, and provide. You know, it's automation. Analytics. Aye, aye. Across APS, cloud infrastructure and even security. Right. And that, you know, that really is a I ops, right? I mean, that's delivering on, you know, as the industry starts toe define and really coalesce around, eh? I ops. That's what we you A ups. >> So just so I'm clear that so it's really your layer your software layer kind of management layer that that integrates all these different systems and provides kind of a unified view. Control? Aye, aye. Reporting et cetera. Right? >> Exactly. Then can plug in and integrate, you know, third party tools to do straight functions. >> I'm just I'm just curious is one of the themes that we here out in the press right now is this is this kind of pull back of public cloud app, something we're coming back. Or maybe it was, you know, kind of a rush. Maybe a little bit too aggressively. What are some of the reasons why people are pulling stuff back out of public clouds that just with the wrong. It was just the wrong application. The costs were not what we anticipated to be. We find it, you know, what are some of the reasons that you see after coming back in house? Yeah, I think it's >> a variety of factors. I mean, it's certainly cost, I think is one. So as there are multiple private options and you know, we don't talk about this, but the hyper skills themselves are coming out with their own different private options like an tars and out pulls an actor stack and on. And Ali Baba has obsessed I and so on. So you see a proliferation of that, then you see many more options around around private cloud. 
So I think the cost is certainly a factor. The second is I think data gravity is, I think, a very important point because as you're starting to see how different applications have to work together, then that becomes a very important point. The third is just about compliance, and, you know, the regulatory environment. As we look across the globe, even outside the U. S. We look at Europe and other parts of Asia as clients and moving more to the cloud. You know that becomes an important factor. So as you start to balance these things, I think you have to take a very application centric view. You see some of those some some maps moving back, and and I think that's the part of the hybrid world is that you know, you can have a nap running on the private cloud and then tomorrow you can move this. Since it's been containerized to run on public and it's, you know, it's all managed. That >> left E. I mean, cost is a big factor if you actually look at it. Most of our clients, you know, they typically you were a big cap ex businesses, and all of a sudden they're using this consumption, you know, consumption model. And they went, really, they didn't have a function to go and look at be thousands or millions of lines of it, right? You know, as your statement Exactly. I think they misjudged, you know, some of the scale on Do you know e? I mean, that's one of the reasons we started. It's got to be an application led, you know, modernization, that really that will dictate that. And I think In many cases, people didn't. May not have thought Through which application. What data? There The data, gravity data. Gravity's a conversation I'm having just by with every client right now. And if I've got a 64 terabyte Hana and that's the core, my crown jewels that data, you know, how do I get that to tensorflow? How'd I get that? >> Right? But if Andy was here, though, and he would say we'll send down the stove, the snow came from which virgin snow plows? Snowball Snowball. 
Well, they're snowballs. But I have seen the whole truck killer that comes out and he'd say, Take that and stick it in the cloud. Because if you've got that data in a single source right now, you can apply multitude of applications across that thing. So they, you know, they're pushing. Get that date end in this single source. Of course. Then to move it, change it. You know, you run into all these micro lines of billing statement, take >> the hotel. I mean, their data stolen the mainframe, so if they anyone need to expose it, Yeah, they have a database cash, and they move it out, You know, particulars of data sets get larger, it becomes, you know, the data. Gravity becomes a big issue because no matter how much you know, while Moore's Law might be might have elongated from 18 to 24 months, the network will always be the bottle Mac. So ultimately, we're seeing, you know, a CZ. We proliferate more and more data, all data sets get bigger and better. The network becomes more of a bottleneck. And that's a It's a lot of times you gotta look at your applications. They have. I've got some legacy database I need to get Thio. I need this to be approximately somewhere where I don't have, you know, high bandwith. Oh, all right. Or, you know, highlight and see type. Also, egress costs a pretty big deals. My date is up in the cloud, and I'm gonna get charged for pulling it off. You know, that's being a big issue, >> you know, it's funny, I think, and I think a lot of the the issue, obviously complexity building. It's a totally from building model, but I think to a lot of people will put stuff in a public cloud and then operated as if they bought it and they're running in the data center in this kind of this. Turn it on, Turn it off when you need it. Everyone turns. Everyone loves to talk about the example turning it on when you need it. But nobody ever talks about turning it off when you don't. But it kind of close on our conversation. 
I won't talk about a I and applied a Iot because he has a lot of talk in the market place. But, hey, I'm machine learning. But as you guys know pride better than anybody, it's the application of a I and specific applications, which really on unlocks the value. And as we're sitting here talking about this complexity, I can't help but think that, you know, applied a I in a management layer like your run differently, set up to actually know when to turn things on, when to turn things off when you moved in but not moved, it's gonna have to be machines running that right cause the data sets and the complexity of these systems is going to be just overwhelming. >> Yeah, yeah, absolutely. Completely agree with you. In fact, attack sensual. We actually refer to this whole area as applied intelligence on That's our guy, right? And it is absolutely to add more and more automation move everything Maur toe where it's being run by the machine rather than you know, having people really working on these things >> yet, e I mean, if you think you hit the nail on the head, we're gonna a eyes e. I mean, given how things getting complex, more ephemeral, you think about kubernetes et cetera. We're gonna have to leverage a humans or not to be able to get, you know, manage this. The environments comported right. What's interesting way we've used quite effectively for quite some time. But it's good at some stuff, not good at others. So we find it's very good at, like, ticket triage, like ticket triage, chicken rounding et cetera. You know, any time we take over account, we tune our AI ai engines. We have ticket advisers, etcetera. That's what probably got the most, you know, most bang for the buck. We tried in the network space, less success to start even with, you know, commercial products that were out there. I think where a I ultimately bails us out of this is if you look at the problem. You know, a lot of times we talked about optimizing around cost, but then performance. 
I mean, and it's they they're somewhat, you know, you gotta weigh him off each other. So you've got a very multi dimensional problem on howto I optimize my workloads, particularly. I gotta kubernetes cluster and something on Amazon, you know, sums running on my private cloud, etcetera. So we're gonna get some very complex environment. And the only way you're gonna be ableto optimize across multi dimensions that cost performance service levels, you know, And then multiple options don't do it public private, You know, what's my network costs etcetera. Isn't a I engine tuning that ai ai engines? So ultimately, I mean, you heard me earlier on the operators. I think you know, they write the analytic albums, they do the automation scripts, but they're the ultimate one too. Then tune the aye aye engines that will manage our environment. And I think it kubernetes will be interesting because it becomes a link to the control plane optimize workload placement. You know, between >> when the best thing to you, then you have dynamic optimization. Could you might be optimizing eggs at us right now. But you might be optimizing for output the next day. So exists really a you know, kind of Ah, never ending when you got me. They got to see them >> together with you and multi dimension. Optimization is very difficult. So I mean, you know, humans can't get their head around. Machines can, but they need to be trained. >> Well, Prasad, Larry, Lots of great opportunities for for centuries bring that expertise to the tables. So thanks for taking a few minutes to walk through some of these things. Our pleasure. Thank you, Grace. Besides Larry, I'm Jeff. You're watching the Cube. We are high above San Francisco in the Salesforce Tower, Theis Center, Innovation hub in San Francisco. Thanks for watching. We'll see you next time.
Steve Randich, FINRA | AWS Summit New York 2019
>> Live from New York, it's theCUBE, covering AWS Global Summit 2019. Brought to you by Amazon Web Services. >> Welcome back. Here in New York City, I'm Stu Miniman, and my co-host is Corey Quinn. In the keynote this morning, Werner Vogels made some new announcements about what they're doing and also brought out a couple of customers who are local, and we're really thrilled and excited to have on the program the CIO and EVP from FINRA here in New York City, Steve Randich. Thanks so much for joining us. >> You're welcome. Thank you. >> All right, so, you know, quite impressive. Scale is one of those misunderstood words out there, but you talk about scale and you talk about speed, and I was taking so many notes in your keynote: 500,000 compute nodes, seven terabytes' worth of new data daily, half a trillion validation checks per day. Some pretty impressive scale, and therefore, you know, IT is not the organization that kind of sits in the basement that the business doesn't think about; business and IT need to be in lockstep. So, you know, I think most people are familiar with FINRA, but maybe give us the kind of bumper sticker as to what FINRA is today and, you know, the organization. >> Yeah. I started at FINRA in 2013. I thought I was going to come into a typical regulator, where, as you alluded to, technology is kind of in the basement, not very important, not strategic. And I realized very quickly two things. Number one, the team was absolutely talented; a lot of the people that we've got on our team came from startups and other technology companies, atypical for financial services. And the second thing is, we had a major big data challenge on our hands. And so the decision to go to the cloud: I started in March 2013, and by July of that year I was already having dialogue with our board of directors about having to go to the cloud in order to handle the data. >> Yeah, so, you know, big data was supposed to be that bit flip that turned "oh my God, I have so much data" into "oh yeah, I can monetize and do things with that data." So give us a little bit of that data journey, and what you talk about as the flywheel, the fact that you've got inside FINRA. >> Yeah. So we knew what we needed. We were running at that time on data warehouse appliances from EMC and IBM, and if you go back 10, 15 years, that was where big data was running. But those machines are vertically scalable, and when you hit the top of the scale, you've got to buy another, bigger one, which might not be available. Public cloud computing is all about horizontal scale at commodity prices, two things those data warehouse appliances didn't have: they were vertical, proprietary, and expensive. And so the key thing was to select the cloud vendor, between Google, IBM, you know, the usual suspects, and to architect our applications properly so that we wouldn't be overly dependent on the cloud provider and locked in, if you will, and so that we could have the flexibility to use commodity software. So we standardized, in conjunction with our move to the public cloud, on open source software, which we continue today: no proprietary software, for the most part, running in the cloud. And we were just very smart about architecting our systems at that point in time to make sure that those opportunities prevailed. The other thing I would say, kind of the secret of our success, is that because we were such early adopters, in the financial services industry and a regulator to boot, we had engineering access to the cloud providers and the big open source software vendors. So we actually had the engineers from AWS and other firms coming in to help us learn how to do it, to do it right. And that's been part of our culture ever since. >> One thing that was, I guess, a very welcome surprise: normally these keynotes tend to fall into almost reductive tropes, where first we're going to have some Twitter-for-Pets-style startup talking about all the higher-level stuff they're doing, and then we're going to have a larger, more serious company come in and talk about how we moved VMs from our data center into the cloud, everyone clap. Instead, it was very clear you're using much higher-level services on top of the cloud provider. It's not just running the VM somewhere else in the same way you would on premises. Was that a transitional step that you went through, or did you effectively, when you went all in, start leveraging those higher services? >> It's a great question, and a differentiator for us versus a lot of the large organizations with a legacy footprint that would not be practical to rewrite. We had outsourced IT entirely in the nineties to EDS, and it was brought back in-house early in this decade. And so we had kind of a fresh environment, fresh people, no legacy, really, other than the data warehouse appliances. So we had a springboard to rewrite our apps in an agile way to be fully cloud-enabled. We worked with AWS, we worked with Cloudera and Hortonworks, with all the key vendors at that time, to figure out how to write our apps so they could take most advantage of what the cloud was offering at that time. And that continues to prevail today. >> That's a great point, because so often it's not just the journey to cloud, it's the application modernization journey, right? So bring us a little inside there. What expertise did FINRA have? I mean, you don't want to be building applications if the open source stuff wasn't mature enough. How much did the vendors have to help? Would you call it, you know, collaboration? >> Yeah. The first year was hard, because I would have, you know, every high-performance database vendor, and I see a number of them here today, I'm sure they're peddling their AWS version now, but they had a private, proprietary database version. They were saying, if you want to handle the volumes that you're seeing and predicting, you really need a proprietary... they wouldn't call it proprietary, but it was essentially a very unique point solution that would cause vendor dependency. And then my architects internally were saying, no, we want to go open source, because that's where the innovation and evolution are going to be fastest, and we're not going to have vendor lock-in. That decision took about a year to solidify, but once we went that way, we never looked back. So from that standpoint, that was a good bet, and it made sense. The other element of your question is how much of this we did on our own versus relying on vendors. Again, the kind of dirty little secret of our beginnings here is that we leveraged the engineers. So typically a firm would get the sales staff, right? We got the engineers. We insisted on it, in order to have them teach our engineers how to do these re-architectures, to do it right. And we could do that because we're in the financial services industry as a regulator, right? So they viewed us as a referenceable account that would be very valuable in their portfolio. So in many regards, we scratched each other's backs. But ultimately, the point is that their engineers trained our engineers, who trained other engineers. And so when I did the keynote at re:Invent in 2016, one of my pillars of our success was that we didn't rely overly on vendors. In the end, we trained 500 to 600 of our own staff on how to do cloud architectures correctly. >> I think at this point it's very clear that you're something of an extreme outlier, in that you integrate, by the nature of what you do, with very large financial institutions, and these historically have not been firms that have embraced the cloud with the speed and enthusiasm that FINRA has. Have you found, as you've gone with this all-in-on-the-cloud approach, that you're having trouble getting some of those other larger financial firms to meet you there? Or has that not really been a concern, based upon FINRA's position within the ecosystem? >> I would say that five years ago it was very rare. You know, I made a conscious effort to be very loud in the press and at conferences about our journey, because it has helped us attract talent. People are coming to work for us, a senior financial services regulator, who wouldn't have considered it five years ago, and they're doing it because they want to be part of this experience that we're having. But a byproduct of being loud in the press is that a lot of firms are saying, well, look what FINRA is doing in the cloud, let's go talk to them. So we've had probably, at this point, 150 to 200 firms that have come to FINRA to learn from our experience. We've got this two-hour presentation that kind of goes through all the aspects of how to do it right, what to avoid, et cetera, et cetera. And, you know, I would say now the companies coming in to us almost universally believe it's the right direction. They're having trouble, whether it's political issues, technology debt, you name it, making the momentum that we've made, but unlike four or five years ago, all of them recognize that it's the direction to go. That's almost undisputed at this point. And to your opening comment, yeah, we're very much an outlier. We've moved 97-plus percent of our apps and 99-plus percent of our data. The only things that haven't really been moved to the cloud at this point are conscious decisions: applications that are going to die on the vine in the data center, or that don't make sense to move to the cloud for whatever reason. >> Okay, you've got almost all your data in the cloud, and you're using open source technology. As Corey said, if I were listening to a traditional financial services company, they'd be telling me all the reasons, for governance and compliance, that they're not going to do it. So, you know, why do you feel safe putting your data in the cloud? >> Well, we've looked at it. I spent my first year at FINRA, 2013 into early 2014, but mostly 2013, convincing our board of directors that moving our most critical applications to the public cloud was going to be no worse, from an information security standpoint, than what we were doing in our private data centers. That presentation ultimately made it to other regulators, major firms on the street, and industry lobbyist groups like SIFMA. It got a lot of air time, and it basically made the point, using logic and reasoning, that going to the cloud and doing it right, not doing it wrong, but doing it right, is at least as secure, from a physical and logical standpoint, as what we were previously doing. And then we went down that route. I got the board approval in 2015. We started looking at it and realizing, wait a minute, what we're doing here, encrypting everything, using micro-segmentation, we weren't doing this in our private data center. It's more secure. And at that point in time, a lot of the analysts in our industry, like Gartner and Forrester, started coming out with papers that basically said, hey, wait a minute, this perception that the cloud is not as safe as on-prem is wrong. And now we look at it like, I can't imagine doing what we're doing now in a private data center. There's no scale, it's not as secure, et cetera, et cetera. >> And to some extent, from the perspective of banks and startups now, when they say, oh, we don't necessarily trust the cloud: well, that's interesting, your regulator does. In other cases, some tax authorities do. You've provided tremendous value just by being as public as you have been; that really starts taking the wind out of the sails of the old fear, uncertainty, and doubt arguments around cloud. >> Yeah, I mean, doubts around "it's not secure" and "I don't have control over it": if you do it right, those are manageable risks, I would argue. In some cases you've got more risk not doing it. But I will caution, everything needs to be on the condition that you do it right. A sloppy migration to the cloud could make you less secure. So there are principles that need to be followed as part of this. >> So, Steve, doing it right. You haven't been sitting still. One of the things that really caught my attention in the keynote was you said that in the last four years you've done three re-architectures, and, if I understood it, each time you got better price-performance. So how do you make sure you do it right, yet have flexibility, both from an architectural standpoint and, you know... don't you have to do three-year reserved instances for some of these? How do you make sure you have the flexibility to be able to take advantage of, as you said, the innovation and automation? >> That's a deep technical question, so I'm going to answer it simply and say that we've architected the software and hardware stack such that there's not a lot of co-dependency between them. That's a natural IT 101 principle, but it's easier to do in the cloud, particularly within AWS, which kind of covers the whole stack; you're not going to different vendors that aren't integrated. That helps a lot, but you also have to architect it right. And then, once you do that and you automate your software development life cycle process, it makes switching out any one component of that stack pretty easy to do and highly automated, in some cases completely automated. And so when new services, or new versions of products, or new classes of machines become available, we just slip them in. The term I used this morning was "mark to market with Moore's Law." That's what we aspire to do: have the highest levels of price-performance achievable at the time they're made available. That wasn't possible previously, because you would go buy a hardware kit and then depreciate it for five years on your books. At the end of those five years it would start to have scale and reliability problems, and then you'd go spend tens of millions of dollars on a new kit, and the whole cycle would start over again. That's not the case here. >> Machine learning, something you've been dipping into. Tell us the impact that has and what you see going forward. >> It's early, but we're big believers in machine learning, and there are a lot of applications for it at FINRA in our various investigatory and regulatory functions. Again, it's early, but I'm a big believer that the compute and storage scale at commodity costs in the public cloud can be tapped into and leveraged to make AI and machine learning achieve what everybody has been talking about and hoping to achieve for the last several decades. We're using it specifically right now in our surveillances for market manipulation and fraud, so fraudsters coming in and manipulating prices in the stock market to take advantage of trading. Early days, but very promising in terms of what it's delivered so far. >> Steve, I want to give you the final word. First of all, thank you for being vocal on this. It sounds like there are a lot of ways for people to understand and see what FINRA has done and really be, you know, an early indicator. So give us a little bit of a look forward. Where's FINRA going next on its journey, and what do you want to see more of from Amazon and the ecosystem around them, to make life better for you and your peers? >> Yes. So some of the challenges that Amazon is working with us on, and partnering on, include getting more automated inter-regional failover. Our industry is a little bit queasy about having everything run within a relatively tight proximity in the East Coast region, and while we replicate our data to the other East region, we think a more co-production environment, like we have across the availability zones within the East, would be looked upon with more advocacy of that architecture from a regulatory standpoint. Another one would be vendor dependency: one of the big objections to moving to a public cloud vendor like Amazon is the vendor dependency, and so making sure that we're not overly technically dependent on them is something that I think is a shared responsibility. The view that you could go and run a single application across multiple cloud vendors: I don't think anybody has been able to successfully do that, because of the differences between providers. You can run one application in one vendor and another application in another vendor, that's fine, but that doesn't really address the vendor dependency question. And then, going forward for FINRA, the real beauty is that if you architected your applications right, then without really doing any work at all you're going to continuously get the benefits of price-performance as they come forward. You're not kind of locked into a status quo. So even without doing much of any new work on our applications, we're going to continue to get the benefits. That's, probably outside of the elastic, massive scale that we take advantage of, the biggest benefit of this whole journey. >> Well, Steve Randich, we really appreciate it. Thank you so much for sharing the journey. All right, for Corey Quinn, I'm Stu Miniman. Back with lots more here from AWS Summit in New York City. Thanks for watching theCUBE.
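Randich's "mark to market with Moore's Law" point, continuously slipping in new machine classes instead of riding out a five-year hardware depreciation cycle, can be roughly illustrated with a toy model. The 20 percent annual price-performance gain and the five-year refresh cycle below are assumed numbers for illustration, not FINRA's actual figures.

```python
# Toy comparison of two strategies for tracking price-performance over time:
# a cloud stack that adopts each new generation as it appears, versus an
# on-premises appliance whose price-performance is frozen between five-year
# refreshes. All rates are hypothetical.

def cloud_price_perf(years, annual_gain=0.20):
    """Relative price-performance if you adopt each year's improvement."""
    return (1 + annual_gain) ** years

def appliance_price_perf(years):
    """Relative price-performance frozen at the last five-year refresh."""
    refresh_age = years - (years % 5)      # year of the most recent refresh
    return cloud_price_perf(refresh_age)   # bought at then-current technology

for year in range(6):
    cloud = cloud_price_perf(year)
    box = appliance_price_perf(year)
    print(f"year {year}: cloud {cloud:.2f}x vs appliance {box:.2f}x")
```

Under these assumptions, by year four the frozen appliance delivers roughly half the price-performance of the continuously refreshed stack (1.00x versus about 2.07x), which is the gap a decoupled, automated architecture closes without any rewrite work.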
theCUBE Insights | IBM CDO Summit 2019
>> Live from San Francisco, California, it's theCUBE covering the IBM Chief Data Officer Summit. Brought to you by IBM. >> Hi everybody, welcome back to theCUBE's coverage of the IBM Chief Data Officer Event. We're here at Fisherman's Wharf in San Francisco at the Centric Hyatt Hotel. This is the 10th anniversary of IBM's Chief Data Officer Summits. In the recent years, anyway, they do one in San Francisco and one in Boston each year, and theCUBE has covered a number of them. I think this is our eighth CDO conference. I'm Dave Vellante, and theCUBE, we like to go out, especially to events like this that are intimate, there's about 140 chief data officers here. We've had the chief data officer from AstraZeneca on, even though he doesn't take that title. We've got a panel coming up later on in the day. And I want to talk about the evolution of that role. The chief data officer emerged out of kind of a wonky, back-office role. It was all about 10, 12 years ago, data quality, master data management, governance, compliance. And as the whole big data meme came into focus and people were realizing that data is the new source of competitive advantage, that data was going to be a source of innovation, what happened was that role emerged, that CDO, chief data officer role, emerged out of the back office and came right to the front and center. And the chief data officer really started to better understand and help companies understand how to monetize the data. Now monetization of data could mean more revenue. It could mean cutting costs. It could mean lowering risk. It could mean, in a hospital situation, saving lives, sort of broad definition of monetization. But it was really understanding how data contributed to value, and then finding ways to operationalize that to speed up time to value, to lower cost, to lower risk. And that required a lot of things. It required new skill sets, new training. It required a partnership with the lines of business. 
It required new technologies like artificial intelligence, which has only recently reached the point where it's gone mainstream. Of course, when I started in the business years ago, AI was the hot topic, but you didn't have the compute power, you didn't have the data, you didn't have the cloud. So we see the new innovation engine not as Moore's Law, the doubling of transistors every 18 months and the doubling of performance that came with it. No, we see the new innovation cocktail as data as the substrate, machine intelligence applied to that data, and the cloud to scale it, and, through that cloud model, the ability to attract startups and innovation. I come back to the chief data officer here at the IBM Chief Data Officer Summit: that's really where the chief data officer comes in. Now, the role in the organization is fuzzy. If you ask people what a chief data officer is, you'll get 20 different answers. Many answers are focused on compliance, particularly in the regulated industries where the role first emerged: financial services, healthcare, and government. Those were the first to have chief data officers, but now CDOs have gone mainstream, so what we're seeing here from IBM is the broadening of that role, that definition, and those responsibilities. Confusing things is the chief digital officer, or the chief analytics officer; those are roles that have also emerged, so there's a lot of overlap and a lot of fuzziness. To whom should the chief data officer report? Many say it should not be the CIO. Many say they should be peers. Many say the CIO's responsibility is similar to the chief data officer's, getting value out of data, although I would argue that's never really been the case. The role of the CIO has largely been to make sure that the technology infrastructure works and that applications are delivered with high availability and great performance, and are able to be developed in an agile manner.
That's a more recent phenomenon that's come forth. And the chief digital officer is really around the company's face: what does the company's brand look like, what does its go-to-market look like, what does the customer see? Whereas the chief data officer has really been around the data strategy, what the framework should be around compliance and governance, and, again, monetization. Not that they're responsible for the monetization itself, but they're responsible for setting that framework, communicating it across the company, accelerating the skill sets and training of existing staff, complementing them with new staff, and really driving that framework throughout the organization in partnership with the chief digital officer, the chief analytics officer, and the chief information officer. That's how I see it, anyway. Martin Schroeter, the senior vice president of IBM, came on today with Inderpal Bhandari, who is the global chief data officer of IBM. Martin Schroeter used to be the CFO at IBM. He talked a lot, kind of borrowing from Ginni Rometty's themes from previous conferences, about chapter one of digital, which he called random acts of digital, and chapter two, which is how to take this mainstream. IBM makes a big deal out of the fact that it doesn't appropriate your data, particularly your personal data, to sell ads. IBM's obviously in the B2B business, so that's IBM's little backhanded shot at Google and Facebook and Amazon, who obviously appropriate our data to sell ads or sell goods. IBM doesn't do that. I'm interested in IBM's opinion on big tech. There are a lot of conversations now: Elizabeth Warren wants to break up big tech. IBM was under the watchful eye of the DOJ 25, 30 years ago. IBM essentially had a monopoly in the business, and the DOJ wanted to make sure that IBM wasn't using that monopoly to hurt consumers and competitors.
Now what IBM did, the DOJ ruled that IBM had to separate its applications business, actually couldn't be in the applications business. Another ruling was that they had to publish the interfaces to IBM mainframes so that competitors could actually build plug-compatible products. That was the world back then. It was all about peripherals plugging into mainframes and sort of applications being developed. So the DOJ took away IBM's power. Fast forward 30 years, now we're hearing Google, Amazon, and Facebook coming under fire from politicians. Should they break up those companies? Now those companies are probably the three leaders in AI. IBM might debate that. I think generally, at theCUBE and SiliconANGLE, we believe that those three companies are leading the charge in AI, along with China Inc: Alibaba, Tencent, Baidu, et cetera, and the Chinese government. So here's the question. What would happen if you broke up big tech? I would surmise that if you break up big tech, those little techs that you break up, Amazon Web Services, WhatsApp, Instagram, those little techs would get bigger. Now, however, the government is implying that it wants to break those up because those entities have access to our data. Google's got access to all the search data. If you start splitting them up, that'll make it harder for them to leverage that data. I would argue those small techs would get bigger, number one. Number two, I would argue if you're worried about China, which clearly you're seeing President Trump is worried about China, placing tariffs on China, playing hardball with China, which is not necessarily a bad thing. In fact, I think it's a good thing because China has been accused, and we all know, of taking IP, stealing IP essentially, and really not putting in those IP protections. So, okay, playing hardball to try to get a quid pro quo on IP protections is a good thing. Not good for trade long term. 
I'd like to see those trade barriers go away, but if it's a negotiation tactic, okay. I can live with it. However, going after the three AI leaders, Amazon, Facebook, and Google, and trying to take them down or break them up, actually, if you're a nationalist, could be a bad thing. Why would you want to handcuff the AI leaders? Third point is unless they're breaking the law. So I think that should be the decision point. Are those three companies, and others, using monopoly power to thwart competition? I would argue that Microsoft actually did use its monopoly power back in the '80s and '90s, in particular in the '90s, when it put Netscape out of business, it put Lotus out of business, it put WordPerfect out of business, it put Novell out of business. Now, maybe those are strong words, but in fact, Microsoft's bundling, its pricing practices, caught those companies off guard. Remember, Jim Barksdale, the CEO of Netscape, said we don't need the browser. He was wrong. Microsoft killed Netscape by bundling Internet Explorer into its operating system. So the DOJ stepped in, some would argue too late, and put handcuffs on Microsoft so they couldn't use that monopoly power. And I would argue that you saw two things come from that. One, granted, Microsoft was overly focused on Windows. That was kind of their raison d'être, and they missed a lot of other opportunities. But the DOJ definitely slowed them down, and I think appropriately. And out of that myopic focus on Windows, and to a certain extent the actions of the Department of Justice and the government, the FTC as well, you saw the emergence of internet companies. Now, Microsoft did a major pivot to the internet. They didn't do a major pivot to the cloud until Satya Nadella came in, and now Microsoft is one of those other big tech companies that is under the watchful eye. But I think Microsoft went through that and perhaps learned its lesson. We'll see what happens with Facebook, Google, and Amazon.
Facebook, in particular, seems to be conflicted right now. Should we take down a video that has somewhat fake news implications or is a deepfake? Or should we just dial it down? We saw this recently with Facebook. They dialed down the promotion. So you almost see Facebook trying to have its cake and eat it too, which personally, I don't think is the right approach. I think Facebook either has to say damn the torpedoes. It's open content, we're going to promote it. Or do the right thing and take those videos down, those fake news videos. It can't have it both ways. So Facebook seems to be somewhat conflicted. They are probably under the most scrutiny now, as well as Google, who's being accused, anyway, certainly we've seen this in the EU, of promoting its own ads over its competitors' ads. So people are going to be watching that. And, of course, Amazon just having too much power. Having too much power is not necessarily an indication of abusing monopoly power, but you know the government is watching. So that bears watching. theCUBE is going to be covering that. We'll be here all day, covering the IBM CDO event. I'm Dave Vellante, you're watching theCUBE. #IBMCDO, DM us or tweet us @theCUBE. I'm @dvellante, keep it right there. We'll be right back right after this short break. (upbeat music)
SUMMARY :
Brought to you by IBM.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vallente | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Tencent | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Jim Barksdale | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Baidu | ORGANIZATION | 0.99+ |
Elizabeth Warren | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
ORGANIZATION | 0.99+ | |
Martin Schroeder | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Inderpal Bhandari | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
AstraZeneca | ORGANIZATION | 0.99+ |
China Inc | ORGANIZATION | 0.99+ |
Novell | ORGANIZATION | 0.99+ |
three companies | QUANTITY | 0.99+ |
San Francisco, California | LOCATION | 0.99+ |
Netscape | ORGANIZATION | 0.99+ |
Department of Justice | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Third point | QUANTITY | 0.99+ |
@Dvallente | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
three leaders | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
today | DATE | 0.99+ |
FTC | ORGANIZATION | 0.99+ |
SiliconANGLE | ORGANIZATION | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
China | ORGANIZATION | 0.98+ |
DOJ | ORGANIZATION | 0.98+ |
20 different answers | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
both ways | QUANTITY | 0.98+ |
IBM Chief Data Officer Summit | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
25 years ago | DATE | 0.98+ |
30 years ago | DATE | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
10th anniversary | QUANTITY | 0.97+ |
each year | QUANTITY | 0.97+ |
Lotus | TITLE | 0.96+ |
IBM CDO Summit 2019 | EVENT | 0.96+ |
theCUBE | EVENT | 0.95+ |
Frank Gens, IDC | Actifio Data Driven 2019
>> From Boston, Massachusetts, it's The Cube. Covering Actifio 2019: Data Driven. Brought to you by Actifio. >> Welcome back to Boston, everybody. We're here at the Intercontinental Hotel at Actifio's Data Driven conference, day one. You're watching The Cube. The leader in on-the-ground tech coverage. My name is Dave Vellante, Stu Miniman is here, so is John Furrier, my friend Frank Gens is here, he's the Senior Vice President and Chief Analyst at IDC and head dot-connector. Frank, welcome to The Cube. >> Well thank you Dave. >> First time. >> First time. >> Newbie. >> Yep. >> You're going to crush it, I know. >> Be gentle. >> You know, you're awesome, I've watched you over the many years, of course, you know, you seem to get competitive, and it's like who gets the best rating? Frank always had the best ratings at the Directions conference. He's blushing but I could- >> I don't know if that's true but I'll accept it. >> I could never beat him, no matter how hard I tried. But you are a phenomenal speaker, you gave a great conversation this morning. I'm sure you drew a lot from your Directions talk, but every year you lay down this, you know, sort of, mini manifesto. You describe it as, you connect the dots, IDC, thousands of analysts. And it's your job to say okay, what does this all mean? Not in the micro, let's up-level a little bit. So, what's happening? You talked today, you know, you gave your version of the wave slides. So, where are we in the waves? We are exiting the experimentation phase, and coming in to a new phase of multiplied innovation. I saw AI on there, block-chain, some other technologies. Where are we today? >> Yeah, well I think having mental models of the industry or any complex system is pretty important. I mean I've made a career dumbing-down a complex industry into something simple enough that I can understand, so we've done it again now with what we call the third platform.
So, ten years ago, seeing the whole raft of new technologies that at the time were coming in, that would become the foundation for the next thirty years of tech, so, that's an old story now. Cloud, mobile, social, big data, obviously IOT technologies coming in, block-chain, and so forth. So we call this general era the third platform, but we noticed a few years ago, well, we're at the threshold of kind of a major scale-up of innovation in this third platform that's very different from the last ten or twelve years, which we called the experimentation stage. Where people were using this stuff, using the cloud, using mobile, big data, to create cool things, but they were doing it in kind of an isolated way. Kind of the traditional, well I'm going to invent something and I may have a few friends help me, whereas, the promise of the cloud has been, well, if you have a lot of developers out on the cloud that form a community, an ecosystem, think of GitHub, you know, any of the big code repositories, or the ability to have shared services on Amazon's cloud, or IBM, or Google, or Microsoft, the promise is there to actually bring to life what Bill Joy said, you know, in the nineties. Which was no matter how smart you are, most of the smart people in the world work for someone else. So the question's always been, well, how do I tap into all those other smart people who don't work for me? So we can feel that where we are in the industry right now is the business model of multiplied innovation or, if you prefer, a network of collaborative innovation, being able to build something interesting quickly, using a lot of innovation from other people, and then adding your special sauce. But that's going to take the scale of innovation just up a couple of orders of magnitude. And the pace, of course, that goes with that, is people are innovating at a much more rapid clip now.
So really, the full promise of a cloud-native innovation model, so we kind of feel like we're right here, which means there's lots of big changes around the technologies, around kind of the world of developers and apps, AI is changing, and of course, the industry structure itself. You know, the power positions, you know, a lot of vendors have spent a lot of energy trying to protect the power positions of the last thirty years. >> Yeah so we're getting into some of that. So, but you know, everybody talks about digital transformation, and they kind of roll their eyes, like it's a big buzzword, but it's real. It's data everywhere at a data-driven conference. And data, you know, being at the heart of businesses means that you're seeing businesses transition industries, or traverse industries, you know, Amazon getting into groceries, Apple getting into content, Amazon as well, etcetera, etcetera, etcetera, so, my question is, what's a tech company? I mean, you know, Benioff says that, you know, every company's a SaaS company, and you're certainly seeing that, and it's got to be great for your business. >> Yeah, yeah absolutely. >> Quantifying all those markets, but I mean, the market that you quantify is just, it's every company now. Banks, insurance companies, grocers, you know? Everybody is a tech company.
It's when we look at the world of clouds, one of the first things we observed in 2007, 2008 was, well, clouds wasn't just about S3 storage clouds, or salesforce.com's softwares and service. It's a model that can be applied to any industry, any company, any offering. And of course we've seen all these startups whether it's Uber or Netflix or whoever it is, basically digital innovation in every single industry, transforming that industry. So, to me that's the exciting part is if that model of transforming industries through the use of software, through digital technology. In that kind of experimentation stage it was mainly a startup story. All those unicorns. To me the multiplied innovation chapter, it's about- (audio cuts out) finally, you know, the cities, the Procter & Gambles, the Walmarts, the John Deere's, they're finally saying hey, this cloud platform and digital innovation, if we can do that in our industry. >> Yeah, so intrapreneurship is actually, you know, starting to- >> Yeah. >> So you and I have seen a lot of psychos, we watched the you know, the mainframe wave get crushed by the micro-processor based revolution, IDC at the time spent a lot of time looking at that. >> Vacuum tubes. >> Water coolant is back. So but the industry has marched to the cadence of Moore's Law forever. Even Thomas Friedman when he talks about, you know, his stuff and he throws in Moore's Law. But no longer Moore's Law the sort of engine of innovation. There's other factors. So what's the innovation cocktail looking forward over the next ten years? You've talked about cloud, you know, we've talked about AI, what's that, you know, sandwich, the innovation sandwich look like? >> Yeah so to me I think it is the harnessing of all this flood of technologies, again, that are mainly coming off the cloud, and that parade is not stopping. Quantum, you know, lots of other technologies are coming down the pipe. 
But to me, you know, it is the mixture of, number one, the cloud, public cloud stacks being able to travel anywhere in the world. So take the cloud on the road. So it's even, I would say, not even just scale, I think of, that's almost like an amount of compute power. Which could happen inside multiple hyperscale data centers. I'm also thinking about scale in terms of the horizontal. >> Bringing that model anywhere. >> Take me out to the edge. >> Wherever your data lives. >> Take me to a Carnival cruise ship, you know, take me to, you know, an Apple-powered autonomous car, or take me to a hospital or a retail store. So the public cloud stacks, where all the innovation is basically happening in the industry. Jail-breaking that out so it can come, you know, it's through Amazon's AWS Outposts, or Azure Stack, or Google Anthos, this movement of the cloud guys, to say we'll take public cloud innovation wherever you need it. That to me is a big part of the cocktail because that's, you know, basically the public clouds have been the epicenter of most tech innovation the last three or four years, so, that's very important. I think, you know just quickly, the other piece of the puzzle is the revolution that's happening in the modularity of apps. So the microservices revolution. So, the building of new apps and the refactoring of old apps using containers, using serverless technologies, you know, API lifecycle management technologies, and of course, agile development methods. Kind of getting to this kind of iterative, sped up deployment model, where people might've deployed new code four times a year, they're now deploying it four times a minute. >> Yeah right. >> So to me that's- and kind of aligned with that is what I was mentioning before, that if you can apply that, kind of, rapid scale, massive volume innovation model and bring others into the party, so now you're part of a cloud-connected community of innovators.
And again, that could be around a GitHub, or could be around a Google or Amazon, or it could be around, you know, Walmart. In a retail world. Or an Amazon in retail. Or it could be around a Procter & Gamble, or around a Disney, digital entertainment, you know, where they're creating ecosystems of innovators, and so to me, bringing people, you know, so it's not just these technologies that enable rapid, high-volume modular innovation, but it's saying okay, now plugging lots of people's brains together is just going to, I think that, here's the- >> And all the data that throws off obviously. >> Throws a ton of data, but, to me the number we use as kind of the punchline for, well, where does multiplied innovation lead? A distributed cloud, this revolution in distributing modular massive scale development, is that we think the next five years, we'll see as many new apps developed and deployed as we saw developed and deployed in the last forty years. So five years, the next five years, versus the last forty years, and so to me that's, that is the revolution. Because, you know, when that happens that means we're going to start seeing that long tail of use cases that people could never get to, you know, all the highly verticalized use cases are going to be filled; you know, we're going to finally see a lot of white space that has been white for decades start getting a lot of cool colors and a lot of solutions delivered to them. >> Let's talk about some of the macro stuff, I don't know the exact numbers, but it's probably three trillion, maybe it's four trillion now, big market. You talked today about the market's going two x GDP. >> Yeah. >> For the tech market, that is. Why is it that the tech market is able to grow at a rate faster than GDP? And is there a relationship between GDP and tech growth?
>> Yeah, well, I think, we are still, while, you know, we've been in tech, talk about those apps developed the last forty years, we've both been there, so- >> And that includes the iPhone apps, too, so that's actually a pretty impressive number when you think about the last ten years being included in that number. >> Absolutely, but if you think about it, we are still kind of teenagers when you think about that Andreson idea of software eating the world. You know, we're just kind of on the early appetizer, you know, the sorbet is coming to clear our palates before we go to the next course. But we're not even close to the main course. And so I think when you look at the kind of, the percentage of companies and industry process that is digital, that has been highly digitized. We're still early days, so to me, I think that's why. That the kind of the steady state of how much of an industry is kind of process and data flow is based on software. I'll just make up a number, you know, we may be a third of the way to whatever the steady state is. We've got two-thirds of the way to go. So to me, that supports growth of IT investment rising at double the rate of overall. Because it's sucking in and absorbing and transforming big pieces of the existing economy, >> So given the size of the market, given that all companies are tech companies. What are your thoughts on the narrative right now? You're hearing a lot of pressure from, you know, public policy to break up big tech. And we saw, you know you and I were there when Microsoft, and I would argue, they were, you know, breaking the law. Okay, the Department of Justice did the right thing, and they put handcuffs on them. >> Yeah. >> But they never really, you know, went after the whole breakup scenario, and you hear a lot of that, a lot of the vitriol. Do you think that makes sense? To break up big tech and what would the result be? >> You don't think I'm going to step on those land mines, do you? 
>> Okay well I've got an opinion. >> Alright I'll give you mine then. Alright, since- >> I mean, I'll lay it out there, I just think if you break up big tech the little techs are going to get bigger. It's going to be like AT&T all over again. The other thing I would add is if you want to go after China for, you know, IP theft, okay fine, but why would you attack the AI leaders? Now, if they're breaking the law, that should not be allowed. I'm not for, you know, monopolistic, you know, illegal behavior. What are your thoughts? >> Alright, you've convinced me to answer this question. >> We're having a conversation- >> Nothing like a little competitive juice going. You're totally wrong. >> Lay it out for me. >> No, I think, but this has been a recurring pattern, as you were saying, it even goes back further to, you know, AT&T and people wanting to connect other devices to the network, the Carterfone, and it goes to IBM mainframes opening up to peripherals. Right, it goes back to it. Exactly. It goes back to the wheel. But it's, yeah, to me it's a valid question to ask. And I think, you know, part of the story I was telling, that multiplied innovation story, and Bill Joy, Joy's Law, is really about platform. Right? And so when you get an aggregated portfolio of technical capabilities that allow innovation to happen. Right, so the great thing is, you know, you typically see concentration, consolidation around those platforms. But of course they give life to a lot of competition and growth on top of them. So that to me is the, that's the conundrum, because if you attack the platform, you may send us back into this kind of disaggregated, less creative- so that's the art, is to take the scalpel and figure out well, where are the appropriate boundaries for, you know, putting those walls, where if you're in this part of the industry, you can't be in this.
So, to me I think one, at least reasonable, way to think about it is, so for example, if you are a major cloud platform player, right, you're providing all of the AI services, the cloud services, the compute services, the block-chain services, that a lot of the SaaS world is using. That, somebody could argue, well, if you get too strong in the SaaS world, you then could be in a position to give yourself favorable position from the platform. Because everyone in the SaaS world is depending on the platform. So somebody might say you can't be in. You know, if you're in the SaaS position you'll have to separate that from the platform business. But I think to me, so that's a logical way to do it, but I think you also have to ask, well, are people actually abusing? Right, so I- >> I think it's a really good question. >> I don't think it's fair to just say, well, theoretically it could be abused. If the abuse is not happening, I don't think it's appropriate to act prophylactically; it's like going after a crime before it's committed. So I think, the other thing that is happening is, often these monopolies or power positions have been about economic power, pricing power. I think there's another dynamic happening because of consumer data, people's data, the Facebook phenomenon, the Twitter and the rest; there's a lot of stuff that's not necessarily about pricing, but that's about kind of social norms and privacy that I think are at work and that we haven't really seen as big a factor. I mean, obviously we've had privacy regulation in Europe with GDPR and the rest, obviously in check, but part of that's because of the social platforms, so that's another vector that is coming in. >> Well, you would like to see the government actually say okay, this is the framework, or this is what we think the law should be. I mean, part of it is okay, Facebook they have incentive to appropriate our data and they get, okay, and maybe they're not taking enough responsibility for it.
But I to date have not seen the evidence, as we did with, you know, Microsoft wiping out, you know, Lotus, and Novell, and WordPerfect through bundling and what it did to Netscape with bundling the browser and the pricing practices that- I don't see that, today, maybe I'm just missing it, but- >> Yeah I think that's going to be all around, you know, online advertising, and all that, to me that's kind of the market- >> Yeah, so Google, some of the Google stuff, that's probably legit, and that's fine, they should stop that. >> But to me the bigger issue is more around privacy. You know, it's a social norm, it's societal, it's not an economic factor I think around Facebook and the social platforms, and I think, I don't know what the right answer is, but I think certainly, government, it's legitimate for those questions to be asked. >> Well maybe GDPR becomes that framework, so, they're trying to give us the hook but, I'm having too much fun. So we're going to- I don't know how closely you follow Facebook, I mean they're obviously big tech, so Facebook has this whole crypto-play, seems like they're using it for driving an ecosystem and making money. As opposed to dealing with the privacy issue. I'd like to see more on the latter than the former, perhaps, but, any thoughts on Facebook and what's going on there with their crypto-play? >> Yeah I don't study them all that much so, I am fascinated when Mark Zuckerberg was saying well now our key business is about privacy, which I find interesting. It doesn't feel that way necessarily, as a consumer and an observer, but- >> Well you're on Facebook, I'm on Facebook, >> Yeah yeah. >> Okay so how about big IPOs, we're in the tenth year now of this huge, you know, tail-wind for tech. Obviously you have guys like Uber, Lyft going IPO, losing tons of money. Stocks actually haven't done that well, which is kind of interesting. You saw Zoom, you know, go public, doing very well. Slack is about to go public.
So there's really a rush to IPO. Your thoughts on that? Is this sustainable? Or are we kind of coming to the end here? >> Yeah so, I think in part, you know, predicting the stock market waves is a very tough thing to do, but I think one kind of secular trend that is going to be relevant for these tech IPOs is what I was mentioning earlier, is that we've now had a ten, twelve year run of basically startups coming in and reinventing industries while the incumbents in the industries are basically sitting on their hands, or sleeping. So to me the next ten years, those startups are going to, not that, I mean we've seen that large companies waking up doesn't necessarily always lead to success, but it feels to me like it's going to be a more competitive environment for all those startups. Because the incumbents, not all of them, and maybe not even most of them, but some decent portion of them, are going to wind up becoming digital giants in their own industry. So to me I think that's a different world the next ten years than the last ten. I do think one important thing, and I think around acquisitions, M&A, and we saw it just the last few weeks with Google and Looker and we saw Tableau with Salesforce, is that the mega-cloud world of Microsoft Azure, and Amazon, Google. That world is clearly consolidating. There's room for three or four global players and that game is almost over. But there's another power position on top of that, which is around where do all the app, business app guys, all the suite guys, SAP, Oracle, Salesforce, Adobe, Microsoft, you name it. Where do they go? And so we see, we think- >> ServiceNow, now kind of getting big. >> Absolutely, so we're entering an intensive period, and I think again, the Tableau and Looker deals are just an example where those companies are all stepping on the gas to become better platforms.
So apps as platforms, or app portfolios as platforms, so, much more of a data play, analytics play, buying other pieces of the app portfolio that they may not have. And basically scaling up to become the business process platforms and ecosystems there. So I think we are just at the beginning of that, so look for a lot of SaaS companies. >> And I wonder if Amazon could become a platform for developers to actually disrupt those traditional SaaS guys. It's not obvious to me how those guys get disrupted, and I'm thinking, everybody says oh, is Amazon going to get into the app space? Maybe some day if they happen to do a TAM expansion. But it seems to me that they become a platform for new apps, you know, your apps explosion. At the edge, obviously, you know, local. >> Well there's no question. I think those app-centric apps, that's what I'd call that competition up there, versus kind of a mega cloud. There's no question the mega cloud guys, they've already started launching, like, call center, contact center software; they're creeping up into that world of business apps, so I don't think they're going to stop, and so I think that that is a reasonable place to look, is will they just start trying to create, in effect, suites and platforms around SaaS of their own. >> Startups, ecosystems like you were saying. Alright, I got to give you some rapid fire questions here, so, when do you think, or do you think, no, I'm going to say when you think, that owning and driving your own car will become the exception, rather than the norm? Buy into the autonomous vehicles hype? Or- >> I think, to me, that's a ten-year type of horizon. >> Okay, ten plus, alright. When will machines be able to make better diagnoses than doctors? >> Well, you could argue that in some fields we're almost there, or we're there. So it's all about the scope of the issue, right? So if it's reading a radiology, you know, film or image, to look for something right there, we're almost there.
But for complex cancers or whatever, that's going to take- >> One more dot connecting question. >> Yeah yeah. >> So do you think large retail stores will essentially disappear? >> Oh boy, that's a- they certainly won't disappear, but I think they can change; witness Apple and Amazon even trying to come in, so it feels that the mix is certainly shifting, right? So it feels to me that the model of retail presence, I think that will still be important. Touch, feel, look, socialize. But it feels like the days of, you know, ten thousand or five thousand store chains, it feels like that's declining in a big way. >> How about big banks? You think they'll lose control of the payment systems? >> I think they're already starting to, yeah, so, I would say that is, and they're trying to get in to compete, so I think that is on its way, no question. I think that horse is out of the barn. >> So cloud, AI, new apps, new innovation cocktails, software eating the world, everybody is a tech company. Frank Gens, great to have you. >> Dave, always great to see you. >> Alright, keep it right there buddy. You're watching The Cube, from Actifio Data Driven 2019. We'll be right back right after this short break. (bouncy electronic music)
SUMMARY :
Brought to you by Actifio. We're here at the Intercontinental Hotel at many years, of course, you know, You know you gave your version of the wave slides. an ecosystem, think of GitHub, you know, I mean, you know, Bennyhoff says that, you know, that you quantify is just it's every company now. digital or cloud services so, you know, we watched the you know, the mainframe wave get crushed we've talked about AI, what's that, you know, sandwich, you know, it is the mixture of number one the cocktail because that's you know, and so to me, bringing people, you know, are going to be filled, you know we're going to I don't know the exact numbers, but it's probably Why is it that the tech market is able to grow And that includes the iPhone apps, too, And so I think when you look at the and I would argue, they were, you know, breaking the law. But they never really, you know, Alright I'll give you mine then. the little techs are going to get bigger. Nothing like a little competitive juice going. so that's the art, is to take the scalpel I don't think it's fair to just say well, as we did with, you know, Microsoft wiping out, you know, Yeah, so Google, some of the Google stuff, and the social platforms, and I think, I don't know I don't know how closely you follow Facebook, I am fascinated when Mark Zuckerberg was saying of this huge, you know, tail-wind for tech. Yeah so, I think in part, you know, predicting the buying other pieces of the app portfolio, At the edge, obviously, you know, local. and so I think that that is a reasonable place to look Alright, I got to give you some rapid fire questions here, diagnosis than than doctors? So if it's reading a radiology, you know, film or image, But it feels like the days of, you know, I think that horse is out of the barn. software eating the world, everybody is a tech company. We'll be right back right after this short break.
Andy Isherwood, AWS EMEA | On the Ground at AWS UK 2019
(electronic music) >> Welcome back to London everybody, this is Dave Vellante with theCUBE, the leader in tech coverage. We're here with a special session in London, we've been following the career of Teresa Carlson around, we asked, "hey, can we come to London to your headquarters there and interview some of the leaders and some of the startups and innovators both in public sector and commercial?" Andy Isherwood is here, he's the managing director of AWS EMEA. Andy, thanks for coming on theCUBE. >> Dave, great to be here, thank you very much for your time. >> So you're about a year in, so that's plenty of time to get acclimated, what are your impressions of AWS and then we'll get into the market? >> Yeah, so it's nearly a year and a half actually, so time definitely goes pretty quickly. So I'd say it's pretty different, I'd say probably a couple of things kind of jump out at me. One is, I think we just have a startup mentality in everything we do. So, y'know, if you think about everything we do kind of works back from the customer and we really feel like a kind of startup at heart. And we always say, y'know, within the organization, we should also make it feel like day one. If we get to day two, y'know, the game's over. So we always try and make day one something that's kind of relevant in what we're doing. I think the second thing is customer obsession. I think we are truly customer obsessed. And you could say that most organizations actually say, y'know, they're customer obsessed. I'd say we're truly customer obsessed in everything we do so if you think about our re:Invent program, if you think about, y'know, London, the summit coming up, what you will notice is that there will be customers everywhere, speaking about their experiences and that's really important. So we start with the customer and we always work back. So super important that we never forget that and if you think about how we develop our services, they start with the customer. 
We don't go out like a product company would and make great products and sell them. We start with the customer, work back, develop the solutions and then let the customer use them, and we iterate on those developments. So I'd say it's pretty different in those two aspects. I'd say the other thing is, it's just hugely relevant. Every customer I go into, and I've seen hundreds of customers in the last year and a half, were hugely relevant. Y'know, we are at the heart of what people want to do and need to do, which makes it important. >> Yeah, so we've been following the career of Andy Jassy for years and we've learnt about the Working Backwards documents, certainly you guys are raising the bar all the time, is sort of the mantra, and yeah, customer centricity, you said it's different, y'know, we do over a hundred events every year and every company out there talks about, "we're focused on the customer", but what makes AWS different? >> I think it's the fact that we truly listen and work back from the customer. So, y'know, we're not a product company, we don't make products with great R&D people and then take them and sell them. We don't obsess about the competition, y'know, we start with the customer, we go and speak to the customer, I think we listen intently to what they need, and we help them look round corners. We help them think about what they need to do for them to be successful, then we work back and probably 90% of what we do is fundamentally developed from those insights that the customer gives us. That's quite different. That really is a working back methodology. 
We run most of our business on AWS and it's true, so I remember we were in a meeting with Andy Jassy one time and he started asking us how we use the platform and what we like about it and don't like about it, and my business partner, John Furrier, he's kind of our CTO, he starts rattling off a number of things that he wanted to see, and Andy pulls out his pad and he starts writing it down, and he was asking questions back and forth, so I think I've seen that in action. One of the things that we've observed is that the adoption of cloud in EMEA and worldwide is pretty consistent and ubiquitous, there's not like a big gap, y'know, you used to see years later, y'know, Europe would maybe adopt a technology and you're seeing actually in many cases, you certainly see it with mobile, you're seeing greater advancements. GDPR, obviously, is a template for privacy, what are you seeing in Europe in terms of some of the major trends of cloud adoption? >> Yeah, I don't think we're seeing major differences, y'know, people talk a lot about, "well, Europe must be two years behind North America" in terms of adoption. We don't see that, I think it is slightly slower in some countries, but I don't think that's kind of common across the piece. So I'd say that the adoption, and if you think back to some customers that were very early adopters, just from an overall global cloud perspective, companies like Shell, for example, y'know they were really early adopters, and those were European-based companies, you could say they're global companies, absolutely, but a lot of what they did was developed in Europe. So I would say that there are countries that are slower to adopt, sometimes driven by the fact that, y'know, security is an issue, or was an issue, that data sovereignty was a bigger issue for some of these countries. But I think all of those are pretty much passed now, so I think we are very quickly kind of catching up with regards to the North American market. So, yeah. 
You mentioned your sort of startup mentality, you mentioned BP. Is it divisions within a large company like that that are startup-like? Is that what you're seeing in terms of the trends? >> No, I'm seeing three patterns. So I'm seeing a pattern which is, y'know, large organizations that go all-in very quickly, typically, y'know, strong leadership, clear vision, need to move quickly. >> Dave Vellante: We're going cloud? >> Yeah, we're going cloud, and we're going all in, and Enel would be a great example. So Enel's a really good example of a top-down approach, very progressive CIO, very clear-thinking CEO that's driven adoption. So I'd say that's pattern one. For me, pattern two is where large organizations create an entity alongside, so almost a separate business. So Openbank is probably a good example, part of Santander. And now that organization has about one and a half million customers, obviously started in Spain, but they built a digital bank, clearly tapping into all of the data and customer sets within Santander, but building an experience which is fundamentally different. >> So a skunkworks that really grew and grew? >> Correct, absolutely, a skunkworks that grew, but grew quickly and now it's becoming y'know, a key part of their business. And then the third area, or the third pattern for me is very much a kind of a bottoms-up-led approach. So this is where the developers basically love the services that we have, they use the services, they typically put them on their credit card or AMEX, and then they'll go and use the services and create real value. That value is then seen and it snowballs. 
I'd say the only outlier to those three patterns is a startup organization, and as you know we've been hugely successful with startups, from, y'know, Pinterest, to Uber, to Careem, to all of these organizations and those organizations it's really important to influence them early on, to make sure that they are aware, and the developer community and the founders are aware of what we can do and we have a number of programs to really help them do that. And they start to use our services, and as those organizations are successful then our business grows alongside them and they, y'know, typically start to use a lot more of the services. >> One of the defining patterns of three, the bottoms-up and four, the start-ups, is they code infrastructure. And, y'know, sometimes the one, the top-down may not have the skillsets and the disciplines and the structure to do that. What are you seeing in terms of that whole programmable infrastructure, the skillsets, programmers essentially coding the infrastructure? Are you seeing CIOs come in and say, "Okay, we need to re-skill", are they bringing in new staff, kind of like number two, the Openbank example might be, y'know, some rockstars that they wanna sort of assign to the skunkwork. How is the number one category dealing with that in terms of their digital transformation? >> Yeah, so y'know, skills is something that is critically important, having the right skills in the right place at the right time. And if you think about Europe it's a big outsourced market, so a lot of those skills were outsourced typically to a lot of the outsourcing companies, as you'd expect. What you're seeing now is organizations, BP's a good example of this, where they're building the innovation capability back into their organizations to make sure that they can create the offerings and create the user experience and create the business models for the new world. 
And what we're doing is really trying to make sure that we're enabling those organizations to build the skills. So probably at a number of different levels, kind of, y'know, very basic level, or at a very junior level we're kind of influencing people in schools. So, y'know, we're going to be announcing, or announcing at the summit, AWS GetIT, which is basically a program to train up year eight students. So you start there, and basically you go all the way through to offering training and certification, we have a very big function associated with that to make sure that we're building the right skills for organizations to be successful, and also then working with partners, so all of those training and certification skills, we are working with the partners like the Cloudreaches of this world, but also the DXCs of this world, the Accentures of this world, the Atoses of this world, really to make sure that they have the right skills and capability, not only around our services but around the movement to cloud which is what these organizations need to do to help them innovate. 
I think from a data point of view, we have a lot of capability, so just to give you a perspective, so since I've been here that year and a half, we started with 125 services. That number of services has gone to 170-odd services now and the innovation that we have within those services has now reached, I think last year, just over the 1900 level so this is iterations on the product. In addition to that, we are continually building new offerings, so if you think about our database strategy, y'know, it's very much to create databases that customers can use in the right way at the right time to do the right job and that's just not one database, it's a number of different databases tuned for specific needs. So we have 14 databases, for example, which are really geared to make customers use the right database at the right time to achieve the right outcome, and we think that's really important, so that's helping people basically use their data in a different way. Obviously our S3, our core storage offering is critically important and hugely successful. We think that as-is, the bedrock for how people think about their data and then they expand and use data lakes, and then underpinning that is making sure that they've got the right databases to support and use that data effectively. >> At the start of this millennium there was like a few databases, databases was a boring marketplace and now it's exploded, as Inova says, dozens a minute it's actually amazing >> Yep >> how much innovation there is occurring in that space. What's your vision for AWS in EMEA? >> Yeah, so you know the overall Amazon vision is to be the world's most customer-obsessed organization, so y'know, here in EMEA, that holds true, so y'know, we start with the customer, we work back, and we wanna make sure that every single customer's happy with what we're doing. I think the second thing is making sure that we are bringing and enabling customers to be innovative. 
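Isherwood's point here, S3 as the bedrock with data lakes layered on top and purpose-built databases tuned for specific jobs, can be made a little more concrete. The sketch below is a hypothetical illustration, not anything from the interview itself: the dataset name, dates, and record IDs are all invented. It shows the Hive-style `key=value` prefix layout commonly used when organizing an S3 data lake, which lets query engines such as Athena or Spark prune whole partitions instead of scanning everything.

```python
from datetime import date

def lake_key(dataset: str, event_day: date, record_id: str) -> str:
    """Build a Hive-style partitioned object key, e.g.
    'telemetry/year=2019/month=05/day=08/rec-001.json'.
    Engines that understand this layout can skip entire
    prefixes, so a query for one day never reads the rest."""
    return (f"{dataset}/year={event_day.year}"
            f"/month={event_day.month:02d}"
            f"/day={event_day.day:02d}/{record_id}.json")

# An actual write would go through an S3 client (e.g. boto3's
# put_object); here we only demonstrate the key layout itself.
key = lake_key("telemetry", date(2019, 5, 8), "rec-001")
print(key)  # telemetry/year=2019/month=05/day=08/rec-001.json
```

The design choice this illustrates is the same one Isherwood describes: keep the raw data cheap and durable in object storage, then point the right engine or database at just the slice of it a given workload needs.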
This is really important to us, and it's really important to the customers that we sell to, y'know, there's many insurgents kind of attacking historic business models, it's really important that we give all of the organizations the ability to use technology, whether they're a small company or a big company. And we call that the democratization of IT, we're making things available that were only available to big companies a while back. Now, we have made those services available to pretty much every single company, whether you're a startup in garage, y'know, to a large global organization. So that's really important that we bring and we continue to democratize IT to make it available for the masses, so that they can go out there and innovate and do what ultimately, customers wanna do, y'know, customers want people to innovate. Customers want a different experience. And it's important that we give organizations the tools and the wherewithal to go and do that. >> Well you've been in the industry long enough, and you've worked at product companies prior to this part of your career, and you know the innovation engine used to be Moore's Law. It used to be how fast can I take advantage of that curve, and that's totally changed now. You see a number of things happening, it's get rid of the heavy lifting, so you can focus on your business, that's what cloud does for you, but it's kind of this combination, the cocktail of data, plus machine intelligence, and then the cloud brings scale, it attracts innovative companies. How do you see, first of all do you buy that sort of new cocktail, and how do you see customers applying that innovation engine? >> Yeah, y'know, to answer the first bit first, we definitely see that cocktail. So y'know, the kind of undifferentiated work that was historically done to kind of build servers and make sure that they ran and all of those things, people don't need to do that now. We do that really really effectively. 
So they can really focus their time, attention, their money, their efforts, their innovation, on creating new experiences, new products, new offerings, for their customers. And they should also work back from customers themselves and work out what's really required. Every single business model, every single offering, needs to be questioned, by every single organization and I think that's what we do. We give the ability to organizations to really think differently about how they use what we have to do the really important things, the things that differentiate them and the things that ultimately give customers a different experience. And that's why I think we've seen so many very successful companies, y'know, from Airbnb, to Pinterest, to Uber. It's giving people a fundamentally different experience and that's what people want, so y'know, we're here to I think give people the ability to create those different experiences. >> Kind of amazing when you go back and you remember the book Does IT Matter? the Havard Business Review famous... It couldn't have been more wrong, at the same time it couldn't have been more right because it really underscored that IT was broken and that preceded 2006 introduction of EC2 and now technology matters more than ever before, every company's a technology company, y'know, you hear Marc Bennioff talk about software's eating the world, it's so true, and so as companies become technology companies, what's your advice to them? I mean obviously you gotta say, "Let us handle the heavy lifting," but what do they have to do to succeed in their digital transformation in your view? >> Yeah, I think it's about changing the mindset and changing the culture of organizations. So I think you can try and instill new processes and new tools on an organization but fundamentally you've gotta change the culture and I think we have to create and enable cultures to be created that are innovative and that requires, I think, a very different mindset. 
That requires a mindset which is about, "we don't mind if you fail". Y'know, and we'll applaud failure. We in Amazon have had many failures but it's applauded, and if it's applauded, people try again so they'll dust themselves off and they'll move on. You can see this in Israel which is, y'know, very much a startup nation. You can see people start a business, they might fail. Next day, they start a new one. So I think it's having this culture of innovation that allows people to experiment. Experimentation's good, but it's also prone to failure. But, y'know, out of 10 experiments you're gonna get one that's successful. That one could be the make or break for your organization to move forward, and give customers what they actually need, so, y'know, super important. >> Break things, move fast, right? >> Exactly. >> I love it. All right, what should we expect tomorrow at the London summit? We gotta big crowd coming, it's at the ExCeL Center >> Yeah, I think you'll see us continue to innovate, I think you'll see a lot of people, and I think you'll see a lot of customers talk about their experience and share their experience, y'know, these are learning summits, y'know, they're not kind of show and tell, they're very much about explaining what other customers are doing, how people can use the innovation and you'll see lots of experiences from different customers that people will be able to take away and learn from and go back to their offices and do similar things, but probably in a different way. So, y'know there'll be lots of exciting announcements, as you saw from re:Invent, we continue to innovate at a fair clip, as I said, 1950-odd innovations, y'know, significant releases last year, so not surprisingly you'll see a few of those. >> These summits are like mini re:Invents, aren't they? And as you said, Andy, very customer-focused, customer-centric; a lot of customer content. So, Andy Isherwood, thanks so much for coming on theCUBE, it was really great to have you. 
>> Great >> All right. >> Thank you >> You're welcome Keep it right there everybody, we'll be back with our next guest right after this short break. This is Dave Vellente, you're watching theCUBE.