Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1


 

(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited to be joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, as well as Anyscale's infrastructure for foundation models. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre-pandemic, and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI. Finally, AI has broken out through the ropes and gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just about every industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. To actually succeed with AI, companies like OpenAI or Google, you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud.
And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in recent history, the amount of compute has been exploding. And so to actually succeed with AI, to actually build and scale these AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about, it was just making the infrastructure programmable. That's super important. That's what AI people wanted first, programmable AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT. Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important?
>> Yeah, so foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all three of these workloads. Companies like OpenAI or Cohere that train large language models, or open source versions like GPT-J, do that on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them; those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems.
Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. You know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are where the opportunities are, and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of the big deal happening, that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray and Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will; you have to make it easier to do.
>> And just for clarification, to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray to provide a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray; basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure, and provide more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions, and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, processor performance doubles roughly every 18 months, you can see that there's just a tremendous gap between the compute needs of machine learning applications and what you can do with a single chip, right.
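The two rates cited here — ML compute demand growing roughly 35x every 18 months, versus Moore's Law doubling chip performance every 18 months — compound into an enormous gap. A back-of-the-envelope sketch (the growth rates are the figures from the talk; the five-year horizon is just an illustrative choice):

```python
def growth_over(years, factor_per_18_months):
    """Compound a per-18-month growth factor over a span of years."""
    periods = years * 12 / 18
    return factor_per_18_months ** periods

demand = growth_over(5, 35)  # ML compute demand, ~35x per 18 months
chip = growth_over(5, 2)     # single-chip performance, Moore's Law doubling
gap = demand / chip          # shortfall that scaling out must cover

print(f"5-year demand growth:      {demand:,.0f}x")
print(f"5-year single-chip growth: {chip:.1f}x")
print(f"gap to cover by scaling:   {gap:,.0f}x")
```

Even under this rough arithmetic, the gap is about four orders of magnitude over five years, which is the talk's point: scaling across many machines is a requirement, not an optimization.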
So even if Moore's Law were continuing strong, you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with a chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. Companies like OpenAI, which use Ray to train their large models like ChatGPT; companies like Uber, which run all of their deep learning and classical machine learning on top of Ray; companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at tremendous scale, processing petabytes of data every single day. And so the project has seen just enormous adoption over the past few years. And one of the most exciting use cases is really providing the infrastructure for building, training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. The workloads required there are things like supervised pre-training, and also reinforcement learning from human feedback.
So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about which response to a particular question, you know, is better than another response, and incorporate that into the learning. There are open source versions as well, like GPT-J, also built on top of Ray, as well as projects like Alpa coming out of UC Berkeley. So these are some examples of exciting projects and organizations training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyperparameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right.
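The core primitive described here — take an ordinary Python function or class and run it across a cluster — is the remote task/actor pattern; in Ray it is exposed as the `@ray.remote` decorator with `.remote()` calls and `ray.get()`. Since running actual Ray code requires a Ray installation and a running cluster, below is a standard-library-only sketch of the same shape (submit a plain function as parallel tasks, then gather results); it illustrates the pattern, not Ray's API:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(record):
    # Stand-in for any ordinary Python function you want to scale out.
    return record * record

with ThreadPoolExecutor(max_workers=4) as pool:
    # Submitting returns futures immediately, like calling f.remote(x) in Ray.
    futures = [pool.submit(preprocess, i) for i in range(8)]
    # Blocking on the results plays the role of ray.get(futures).
    results = [f.result() for f in futures]

print(results)
```

The key idea in both cases is that the calling code stays plain Python; only the submit/gather boundary changes when you move from a thread pool on a laptop to tasks spread over a cluster.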
This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. It integrates with libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn, or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right.
Or, you know, different data platforms like Databricks, Delta Lake or Snowflake, or tools for model monitoring, for feature stores; all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale. >> John: Awesome. >> Provide that context.
But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow, or is Python the most friendly with machine learning, or is it because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching this video, get on the Anyscale bus quickly. Also, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed their rapid growth 'cause they grew their user base faster than anyone in the history of the computer industry, so a major success, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale, and it came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one.
>> But that's the kind of thing that came up really, really quickly. If I asked it to write a sales document, it probably would, but this is the future interface. This is why people are getting excited about the foundational models and the large language models, because it's allowing the interface with the user, the consumer, to be more human, more natural. And this clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things where we're going to look back at this time, Robert, and say, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when, say, the pandemic came. So getting in early is a good thing, and that's what everyone's talking about, getting in early and playing around, maybe replatforming or even picking one or a few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs?
What's the landscape look like from an operational standpoint, from the customer? Are they locked in, or is the benefit flexibility, are you flexible to handle any Cloud? What are the customers looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here, is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using, to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a 5x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies processing petabytes of data every single day with Ray get order-of-magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year, getting a 10x cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if you're a prospect for this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS; the same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here.
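The utilization numbers translate directly into cost per useful GPU-hour: at a fixed hourly price, effective cost scales as one over utilization. A quick check of the 20% to 95% figures from the conversation (the hourly price is a made-up placeholder, not a real cloud rate):

```python
def cost_per_useful_gpu_hour(hourly_price, utilization):
    """Effective price paid per hour of GPU actually doing work."""
    return hourly_price / utilization

price = 3.00  # hypothetical $/GPU-hour, purely illustrative
before = cost_per_useful_gpu_hour(price, 0.20)  # 20% utilization
after = cost_per_useful_gpu_hour(price, 0.95)   # 95% utilization

print(f"before: ${before:.2f} per useful hour")
print(f"after:  ${after:.2f} per useful hour")
print(f"improvement: {before / after:.2f}x")
```

The improvement ratio is 0.95 / 0.20 = 4.75x regardless of the price chosen, which lines up with the "something like a five x improvement" figure in the conversation.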
So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive development, right. So here, imagine you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, a bunch of memory. And as that's running, and by the way, if I wanted to run this on 64 or 128 GPUs instead of 32, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS Code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray Train to train the torch model. We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using DeepSpeed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right, and how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray Train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here, where I think we're currently loading the model and running the actual application to start the training. And some of the things that are really convenient here about Anyscale: I can get that interactive development experience with VS Code.
You know, I can look at the dashboards. I can monitor what's going on. I have a terminal; it feels like my laptop, but it's actually running on a large cluster, with however many GPUs or other resources I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop with the benefits, you know, of being able to take advantage of all the resources in the Cloud to scale. And you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money, is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. At the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there was no compute. Okay, now it's just compute sitting there idle. But you know, data cranking the models, that's a big point.
>> Another thing I want to add there about cost efficiency is that we make it really easy, if you're running on Anyscale, to use spot instances, these preemptible instances that can be significantly cheaper than the on-demand instances. And when we see our customers go from what they were doing before to using Anyscale, they go from not using these spot instances, 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box, use spot instances, and save a bunch of money. >> You know, this was my whole, my feature article at re:Invent last year when I met with Adam Selipsky: this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud, and what you were showing me in that demo had this whole SRE vibe. And remember Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, Ray's got its own website. You got Anyscale.
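The spot-instance point reduces to simple blended-cost arithmetic: the savings equal the spot discount times the fraction of work that can tolerate preemption. The discount and workload split below are illustrative assumptions, not quoted cloud prices:

```python
def blended_bill_reduction(spot_discount, preemptible_fraction):
    """Fraction of the total bill saved by moving preemptible work to spot."""
    return spot_discount * preemptible_fraction

# Hypothetical figures: spot at a 70% discount, 80% of workers fault-tolerant.
savings = blended_bill_reduction(spot_discount=0.70, preemptible_fraction=0.80)
print(f"blended bill reduction: {savings:.0%}")
```

This is why the fault-tolerance machinery matters: without it, `preemptible_fraction` is effectively zero and the discount is unreachable no matter how cheap the spot market gets.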
You got an event coming up. Give a plug for the company; you're looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI and get value out of AI. Now, if you're interested in learning more about Ray: Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board, companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continue to iterate on; you've got growth ahead of you, you've got a tailwind. I mean, the AI wave is here.
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, role of data, large scale, how to make that programmable, so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date: Mar 9, 2023


Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic with several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader, in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with Apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years, but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data through sensors, cameras, and other edge devices, which are going to drive enormous data volumes and processing power to boot. 
Every windmill, every factory device, every consumer device, every car, will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space compared to PCs and even the iPhone itself is about to be dwarfed with an explosion of devices. Intel is not well positioned for this new world in our view. Intel has to catch up on the process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date, and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity. And while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up its world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor, an Israeli firm that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers. And the company has announced major investments in CAPEX to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for $15 billion in 2017. Or does it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue, and is likely going to be worth more like 25 to 30 billion, we'll see. 
But Intel is going to maybe get $10 billion in cash from that spin out, that IPO, and it can use that to fund more fabs and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He's got to sell what he's got. And finally, Pat is playing politics beautifully. Announcing, for example, fab investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says, "torrid, bringing back the torrid pace and discipline that Intel is used to." And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030, and reset supply chains in a market that has become as important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU. Mentioning the CHIPS Act in his presentation in the US and Europe as part of a public private partnership, no doubt, he's going to need all the help he can get. Now, we couldn't resist this: the chart on the left here shows wafer starts and transistor capacity growth for Intel over time, which speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. 
Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick: in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC, data center and network edge businesses, and the rest from advanced graphics, HPC, Mobileye and Foundry. Okay, that sounds pretty good. But it has to be taken in context; against the balance of the semiconductor industry, yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry because that's the only way Intel is going to get back into the volume game and the volume necessary for the company to compete. Pat built this slide showing the baby blue for today's Foundry business just under a billion dollars and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader, and is a $50 billion company growing. So there's definitely a market there that it can go after. And adding in ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there can only help volume if Intel can win that business, which you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. 
This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in the semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts and then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty. But you can see on the bottom left, the flattening of the cumulative output curve in IDM 1.0, otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets your cumulative output to $100 million in, sorry, 100 million units in the second year; it takes you two years to get to that 100 million. So in other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can get wafer volumes to be flat, which that chart showed, with good yields, you're at 150 now in year three, 200 in year four, 250 in year five, 300 in year six. Now, that's four years before you can take advantage of Wright's Law. 
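As an editor's aside, the cumulative-output arithmetic in this segment is easy to verify with a few lines of Python. This is a sketch of the reasoning only: the 50 million units per year figure is the made-up flat-output number from the analysis, and the 15% learning rate is the rule-of-thumb quoted above.

```python
# Wright's Law sketch: each cumulative doubling of units produced cuts
# unit cost by a constant learning rate (~15% per the analysis above).

def doubling_years(annual_units, horizon_years):
    """Return the year in which cumulative output reaches each successive
    doubling of first-year output, under flat annual production."""
    cumulative, target, years = 0, 2 * annual_units, []
    for year in range(1, horizon_years + 1):
        cumulative += annual_units
        while cumulative >= target:
            years.append(year)
            target *= 2
    return years

# Flat 50M units/year: the 100M, 200M, 400M, 800M cumulative marks land
# in years 2, 4, 8, 16 -- each 15% cost step takes twice as long as the last.
print(doubling_years(annual_units=50, horizon_years=16))  # [2, 4, 8, 16]

# Cost multiplier after k doublings at a 15% learning rate (0.85 ** k):
for k in range(1, 4):
    print(k, 0.85 ** k)
```

That doubling of the wait between cost steps is exactly the "death spiral" point: with flat output, each successive 15% cost reduction arrives half as often.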
You keep going at that flat wafer start, with that simplifying assumption we made at the start of 50 million units a year, and, well, you get the point: it's now eight years before you can get Wright's Law to kick in, and you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge TAM that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. So Intel is assuming that we'll keep up with Moore's Law, meaning a doubling of transistors every, let's say, two years, and I believe it. So bring that back to Wright's Law, in the previous chart, it means with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0 where they were failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture. And you can see the stats here, 114 billion transistors on a five nanometer process and all the other stats. The M1 Ultra has two chips. They're bonded together. And Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection. You can see 2.5 terabytes per second. But the brilliance is the two chips act as a single chip. So you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. 
Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead. Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip. And you could see in that diagram, the recently launched M1 Ultra has 114 billion transistors per chip. Now if you take into account the size of the chips, which are increasing, and the increase in the number of transistors per chip, that transistor density, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember Intel, assuming the results in the two previous charts that we showed, assuming they were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well because they can take advantage of TSM's learning curve. So in the previous chart with Moore's Law, alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are, you know, they're probably prudent to wait unless they have really a long term view. And you can see Intel's performance relative to some of the major competitors. You know, Pat talked about five nodes in four years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes, but Intel just delayed Granite Rapids last month, pushing it out from 2023 to 2024. And it told investors that we're going to have to boost spending to turn this ship around, which is absolutely the case. 
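The 6x-versus-2x comparison in this segment is simple compound growth, and annualizing both rates shows how fast the gap widens. A rough sketch, using only the figures quoted in the analysis (6x density growth in 18 months, 2x every two years):

```python
# Annualize the two transistor-density growth rates quoted above:
# Intel at 2x every two years vs. the M1 -> M1 Ultra jump of ~6x in
# 18 months (per the analysis; density, not raw transistor count).

def annual_multiple(factor, years):
    """Convert 'factor-x growth over N years' into a per-year multiple."""
    return factor ** (1 / years)

intel = annual_multiple(2, 2)    # ~1.41x per year
apple = annual_multiple(6, 1.5)  # ~3.3x per year

# Compounded over the ~8 years to 2030, a per-year gap of ~2.3x
# widens into a multiple-hundreds-fold difference.
gap_2030 = (apple / intel) ** 8
print(round(intel, 2), round(apple, 2), gap_2030)
```

The point of the exercise: at compounding rates this far apart, arriving at a trillion transistors "on schedule" still means arriving years behind the faster curve.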
And that delay in chips, I feel, is the first disappointment and won't be the last. But as we've said many times, it's very difficult, actually, it's impossible to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by reiterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story. It's going to be tough to wrestle that business away from TSM, especially as TSM is setting up shop in Arizona, with US manufacturing that's going to placate the US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC and as part of the CHIPS Act, you'll have to throw some business at Intel. Would that be enough when combined with other Foundry opportunities Intel could theoretically produce? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If Intel were really paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is that Intel's strategy and execution allow it to get competitive volumes by the end of the decade, and this national treasure survives to fight for its leadership position in the 2030s. Because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis," and works with our CUBE editorial team. Kristen Martin and Cheryl Knight help get the word out. And thanks to SiliconANGLE's editor in chief Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. 
Remember, these episodes are all available as podcasts wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can always get in touch with me on email, david.vellante@siliconangle.com, or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Mar 12 2022


Breaking Analysis: Rethinking Data Protection in the 2020s


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Techniques to protect sensitive data have evolved over thousands of years, literally. The pace of modern data protection is rapidly accelerating and presents both opportunities and threats for organizations. In particular, the amount of data stored in the cloud combined with hybrid work models, the clear and present threat of cyber crime, regulatory edicts, and the ever expanding edge and associated use cases should put CXOs on notice that the time is now to rethink your data protection strategies. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we're going to explore the evolving world of data protection and share some information on how we see the market changing in the competitive landscape for some of the top players. Steve Kenniston, AKA the Storage Alchemist, shared a story with me, and it was pretty clever. Way back in 4000 BC, the Sumerians invented the first system of writing. Now, they used clay tokens to represent transactions at that time. Now, to prevent messing with these tokens, they sealed them in clay jars to ensure that the tokens, i.e. the data, would remain secure with an accurate record that was, let's call it, quasi-immutable, and lived in a clay vault. And since that time, we've seen quite an evolution of data protection. Tape, of course, was the main means of protecting data and backing data up during most of the mainframe era. And that carried into client server computing, which really accentuated and underscored the issues around backup windows and challenges with RTO, recovery time objective, and RPO, recovery point objective. And just overall recovery nightmares. Then in the 2000s, data reduction made disk-based backup more popular and pushed tape into an archive of last resort. 
Data Domain, then EMC, now Dell still sell many purpose-built backup appliances, as do others, as a primary disk-based backup target. The rise of virtualization brought more changes in backup and recovery strategies, as a reduction in physical resources squeezed the one application that wasn't underutilizing compute, i.e., backup. And we saw the rise of Veeam, the cleverly-named company that became synonymous with data protection for virtual machines. Now, the cloud has created new challenges related to data sovereignty, governance, latency, copy creep, expense, et cetera. But more recently, cyber threats have elevated data protection to become a critical adjacency to information security. Cyber resilience to specifically protect against attacks is the new trend being pushed by the vendor community as organizations are urgently looking for help with this insidious threat. Okay, so there are two major disruptors that we're going to talk about today, the cloud and cyber crime, especially around ransoming your data. Every customer is using the cloud in some way, shape, or form. Around 76% are using multiple clouds, that's according to a recent study by HashiCorp. We've talked extensively about skill shortages on theCUBE, and data protection and security concerns are really key challenges to address, given that skill shortage is a real talent gap in terms of being able to throw people at solving this problem. So what customers are doing, they're either building out or they're buying, really mostly building, abstraction layers to hide the underlying cloud complexity. So what this does... The good news is it simplifies provisioning and management, but it creates problems around opacity. In other words, you sometimes can't see what's going on with the data. These challenges fundamentally become data problems, in our view. Things like fast, accurate, and complete backup recovery, compliance, data sovereignty, data sharing. 
I mentioned copy creep, cyber resiliency, privacy protections. These are all challenges brought to the fore by the cloud, the advantages, the pros, and the cons. Now, remote workers are especially vulnerable. And as clouds expand rapidly, data protection technologies are struggling to keep pace. So let's talk briefly about the rapidly-expanding public cloud. This chart shows worldwide revenue for the big four hyperscalers. As you can see, we projected that they're going to surpass $115 billion in revenue in 2021. That's up from 86 billion last year. So it's a huge market, it's growing in the 35% range. The interesting thing is last year, 80-plus billion dollars in revenue, but 100 billion dollars was spent last year by these firms in capex. So they're building out infrastructure for the industry. This is a gift to the balance of the industry. Now to date, legacy vendors and the surrounding community have been pretty defensive around the cloud. Oh, not everything's going to move to the cloud. It's not a zero sum game, we hear. And while that's all true, the narrative was really kind of a defensive posture, and that's starting to change as large tech companies like Dell, IBM, Cisco, HPE, and others see opportunities to build on top of this infrastructure. You certainly see that with Arvind Krishna's comments at IBM, Cisco obviously leaning in from a networking and security perspective, HPE using language that is very much cloud-like with its GreenLake strategy. And of course, Dell is all over this. Let's listen to how Michael Dell is thinking about this opportunity when he was questioned on theCUBE by John Furrier about the cloud. Play the clip. So in my view, Michael nailed it. The cloud is everywhere. You have to make it easy. And you have to admire the scope of his comments. We know this guy, he thinks big. He said, "Enables everything." 
He's basically saying that technology is at the point where it has the potential to touch virtually every industry, every person, every problem, everything. So let's talk about how this informs the changing world of data protection. Now, we all know, we've seen with the pandemic, there's an acceleration toward digital, and that has caused an escalation, if you will, in the data protection mandate. So essentially what we're talking about here is the application of Michael Dell's cloud everywhere comments. You've got on-prem, private clouds, hybrid clouds. You've got public clouds across AWS, Azure, Google, Alibaba. Really those are the big four hyperscalers. You got many clouds that are popping up all over the place. But multi-cloud, to that HashiCorp data point, 76%. And then you now see the cloud expanding out to the edge, programmable infrastructure heading out to the edge. So the opportunity here to build the data protection cloud is to have the same experiences across all these estates with automation and orchestration in that cloud, that data protection cloud, if you will. So think of it as an abstraction layer that hides that underlying complexity, you log into that data protection cloud, it's the same experience. So you've got backup, you've got recovery, you can handle bare metal. You can do virtualized backups and recoveries, any cloud, any OS, out to the edge, Kubernetes and container use cases, which is an emerging data protection requirement. And you've got analytics, perhaps you've got PII, personally identifiable information protection in there. So the attributes of this data protection cloud, again, it abstracts the underlying cloud primitives, takes care of that. It also exploits cloud native technologies. In other words, it takes advantage of whether it's machine learning, which all the big cloud players have expertise in, new processor models, things like Graviton, and other services that are in the cloud natively. 
It doesn't just wrap its on-prem stack in a container and shove it into the cloud, no. It actually re-architects, or architects around, those cloud native services. And it's got distributed metadata to track files and volumes and any organizational data irrespective of location. And it enables sets of services to intelligently govern in a federated governance manner while ensuring data integrity. And all this is automated and orchestrated to help with the skills gap. Now, as it relates to cyber recovery, air-gap solutions must be part of the portfolio, but managed outside of that data protection cloud that we just briefly described. The orchestration and the management must also be gapped, if you will. Otherwise, (laughs) you don't have an air gap. So all of this is really a cohort to cyber security or your cybersecurity strategy and posture, but you have to be careful here because your data protection strategy could get lost in this mess. So you want to think about the data protection cloud as, again, an adjacency or maybe an overlay to your cybersecurity approach. Not a bolt-on, it's got to be fundamentally architected from the bottom up. And yes, this is going to maybe create some overhead and some integration challenges, but this is the way in which we think you should think about it. So you'll likely need a partner to do this. Again, we come back to the skills gap, and we're seeing the rise of MSPs, managed service providers and specialist service providers. Not public cloud providers. People are concerned about lock-in, and that's really not their role. They're not a high-touch services company. Probably not your technology arms dealer, (clears throat) excuse me, they're selling technology to these MSPs. So the MSPs, they have intimate relationships with their customers. They understand their business and specialize in architecting solutions to handle these difficult challenges. 
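To make the "abstraction layer" idea above concrete, here is a deliberately tiny sketch: one control plane fronting per-estate drivers, with the air-gapped copy intentionally excluded from that control plane, as the analysis argues it must be. Every class and method name here is illustrative, not any vendor's actual API.

```python
# Sketch of a data-protection control plane that hides per-cloud
# differences behind one uniform interface. Names are illustrative only.

class BackupDriver:
    """Per-estate driver: on-prem, a public cloud, the edge, etc."""
    def __init__(self, estate):
        self.estate = estate
        self.snapshots = []

    def backup(self, volume):
        snap = f"{self.estate}:{volume}"
        self.snapshots.append(snap)
        return snap

class DataProtectionCloud:
    """One login, one experience, any estate: the abstraction layer."""
    def __init__(self, estates):
        self.drivers = {e: BackupDriver(e) for e in estates}
        # Deliberately no driver for the air-gapped vault: if its
        # orchestration lives inside this control plane, it isn't
        # really an air gap.

    def backup_everywhere(self, volume):
        # Same call regardless of which cloud each driver talks to.
        return [d.backup(volume) for d in self.drivers.values()]

dpc = DataProtectionCloud(["on-prem", "aws", "azure", "edge"])
print(dpc.backup_everywhere("db01"))
```

The design point mirrors the transcript: the caller sees one experience across estates, while the air-gapped copy and its management sit outside this layer entirely.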
So let's take a look at some of the risk factors here, dig a little bit into the cyber threat that organizations face. This is a slide that, again, the Storage Alchemist, Steve Kenniston, shared with me. It's based on a study that IBM funds with the Ponemon Institute, which is a firm that studies these things like cost of breaches and has for many, many, many years. The slide shows the total cost of a typical breach within each dot on the Y axis, and the frequency in percentage terms on the horizontal axis. Now, it's interesting. The top two, compromised credentials and phishing, once again prove that bad user behavior trumps good security every time. But the point here is that the adversary's attack vectors are many. And specific companies often specialize in solving these problems, often with point products, which is why the slide that we showed from Optiv earlier, that messy slide, looks so cluttered. So there's a huge challenge for companies. And that's why we've seen the emergence of cyber recovery solutions from virtually all the major players. Ransomware and the SolarWinds hack have made trust the number one issue for CIOs and CISOs and boards of directors. Shifting CISO spending patterns are clear. They're shifting largely because they're catalyzed by the work from home. But outside of the moat, to endpoint security, identity and access management, cloud security, and horizontal network security. So security priorities and spending are changing. And that's why you see the emergence of disruptors like we've covered extensively, Okta, CrowdStrike, Zscaler. And cyber resilience is top of mind, and robust solutions are required. And that's why companies are building cyber recovery solutions that are most often focused on the backup corpus because that's a target for the bad guys. 
So there is an opportunity, however, to expand from just the backup corpus to all data and protect this kind of 3, 2, 1, or maybe it's 3, 2, 1, 1: three copies, two backups, a backup in the cloud, and one that's air gapped. So this can be extended to primary storage, copies, snaps, containers, data in motion, et cetera, to have a comprehensive data protection strategy. And customers, as I said earlier, are increasingly looking to managed service providers and specialists because of that skills gap. And that's a big reason why automation is so important in orchestration. And automation and orchestration, I'll emphasize, on the air gap solutions should be separated physically and logically. All right, now let's take a look at some of the ETR data and some of the players. This is a chart that we like to show often. It's an X-Y axis. And the Y axis is net score, which is a measure of spending momentum. And the horizontal axis is market share. Now, market share is an indicator of pervasiveness in the survey. It's not spending market share, it's not market share of the overall market, it's a term that ETR uses. It's essentially market share of the responses within the survey set. Think of it as mind share. Okay, you've got the pure plays here on this slide, in the storage category. There is no data protection or backup category. So what we've done is we've isolated the pure plays or close to pure plays in backup and data protection. Now notice that red line, that red is kind of our subjective view of anything that's over that 40% line is elevated. And you can see only Rubrik, in the July survey, is over that 40% line. I'll show you the Ns in a moment. Smaller Ns, but still, Rubrik is the only one. Now, look at Cohesity and Rubrik in January 2020. So last year, pre-pandemic, Cohesity and Rubrik, they've come well off their peak for net score. Look at Veeam. Having studied this data for the last, say, 24 months, Veeam has been steady Eddy. 
It is really always in the mid to high 30s, it always shows a large shared N, so it's coming up in the survey. Customers are mentioning Veeam. And it's got a very solid net score. It's not above that 40% line, but it's hovering just below, consistently. That's very impressive. Commvault has steadily been moving up. Sanjay Mirchandani has made some acquisitions. He did the Hedvig acquisition. They launched Metallic; that's driving cloud affinity within Commvault's large customer base. So it's a good example of a legacy player pivoting and evolving and transforming itself. Veritas continues to underperform in the ETR surveys relative to the other players. Now, for context, let's add IBM and Dell to the chart. Just note, this is IBM and Dell's full storage portfolio. The category in the taxonomy at ETR is all storage. In the previous slide, I isolated the pure plays, but this now adds in IBM and Dell. It's probably representative of where they would be; probably Dell larger on the horizontal axis than IBM, of course. And you can see the spending momentum accordingly. So you can see that in the data chart that we've inserted. So, some smaller Ns for Rubrik and Cohesity, but still enough to pay attention to; it's not like one or two. When the Ns are 20-plus, 15-plus, 25-plus, you can start to pay attention to trends. Veeam, again, is very impressive. Its net score is solid, it's got a consistent presence in the dataset; it's the clear leader here. SimpliVity is small, but it's improving relative to the last several surveys. And we talked about Commvault. Now, I want to emphasize something that we've been hitting on for quite some time now, and that's the renaissance that's coming in compute. Now, we all know about Moore's Law, the doubling of transistor density every two years, 18 to 24 months, and that leads to a doubling of performance in that timeframe. That x86 curve is in blue. And if you do the math, this is expressed in trillions of operations per second. 
The orange line is representative of Apple's A series, culminating most recently in the A15. The A series is what Apple is now... well, it's the technology basis for what's inside the M1, the new Apple laptops, which is replacing Intel. That's that orange line there; we'll come back to that. So go back to the blue line for a minute. If you do the math on doubling performance every 24 months, it comes out to roughly 40% annual improvement in processing power per year. That's now moderated. So Moore's Law is waning in one sense, yet we wrote a piece, "Moore's Law is Not Dead." So I'm sort of contradicting myself there. But the traditional Moore's Law curve on x86 is waning. It's probably now down to around 30%, low 30s. But look at the orange line. Again, using the A series as an indicator, if you combine the CPU, the NPU, the neural processing unit, the XPU, pick whatever PU you want, the accelerators, the DSPs, that line is growing at 100% plus per year. It's probably more accurately around 110% a year. So there's a new industry curve occurring, and it's being led by the Arm ecosystem. The other key factor there, and you're seeing this in a lot of use cases, a lot of consumer use cases, Apple is an example, but you're also seeing it in things like Tesla, and Amazon with AWS Graviton, the Annapurna acquisition, building out Graviton and Nitro; that's based on Arm. You can get from design to tape out in less than two years, whereas the Intel cycles, we know, have been running at four to five years. Maybe Pat Gelsinger is compressing those. But Intel is behind. So organizations that are on that orange curve are going to see faster acceleration, lower cost, lower power, et cetera. All right, so what's the tie to data protection? I'm going to leave you with this chart. Arm has introduced its confidential compute architecture and is ushering in a new era of security and data protection. Zero trust is the new mandate. 
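The growth arithmetic cited above is easy to verify: a fixed doubling period of 24 months implies an annual factor of the square root of two, about 41% per year (the "roughly 40%" figure), while doubling every 12 months is exactly 100% per year, in line with the Arm-ecosystem curve. A quick sketch:

```python
# Annual growth rate (percent) implied by a fixed doubling period in months.
def annual_growth_pct(doubling_months):
    return (2 ** (12.0 / doubling_months) - 1) * 100

print(round(annual_growth_pct(24)))  # doubling every 24 months -> ~41% per year
print(round(annual_growth_pct(12)))  # doubling every 12 months -> 100% per year
```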
And what Arm has done with what they call realms is create physical separation of the vulnerable components, by creating essentially physical buckets to put code in and to put data in, separate from the OS. Remember, the OS is one of the most valuable entry points for hackers, because it contains privileged access, and it's a weak link because of things like memory leakages and vulnerabilities. And malicious code can be placed by bad guys within data in the OS and appear benign, even though it's anything but. So in this model, all the OS does is make API calls to the realm controller. That's the only interaction. So it makes it much harder for bad actors to get access to the code and the data. And importantly, very importantly, it's an end-to-end architecture, so there's protection throughout. If you're pulling data from the edge and bringing it back to the on-prem or the cloud, you've got that end-to-end architecture and protection throughout. So the link to data protection is that backup software vendors need to be the most trusted of applications. Backup software needs to be the most trusted of applications because it's one of the most targeted areas in a cyber attack. Realms provide an end-to-end separation of data and code from the OS, and it's a better architectural construct to support zero trust and confidential computing and critical use cases like data protection/backup and other digital business apps. So our call to action is: backup software vendors, you can lead the charge. Arm is several years ahead at the moment, ahead of Intel, in our view. So you've got to pay attention to that, research that. We're not saying over-rotate, but go investigate it. And use your relationships with Intel to accelerate its version of this architecture. Or ideally, the industry should agree on common standards and solve this problem together. 
Pat Gelsinger told us in theCUBE that if it's the last thing he's going to do in his industry life, he's going to solve this security problem. That's when he was at VMware. Well, Pat, you're in an even better place to do it now. You don't have to solve it yourself; you can't, and you know that. So while you're going about your business saving Intel, look to partner with Arm. I know it sounds crazy, but use these published APIs and push to collaborate on an open source architecture that addresses the cyber problem. If anyone can do it, you can. Okay, that's it for today. Remember, these episodes are all available as podcasts. All you've got to do is search Breaking Analysis Podcast. I publish weekly on wikibon.com and siliconangle.com. Or you can reach me @dvellante on Twitter, or email me at david.vellante@siliconangle.com. And don't forget to check out etr.plus for all the survey and data action. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, everybody. Be well, and we'll see you next time. (gentle music)

Published Date : Aug 13 2021


Massimo Re Ferre, AWS | DockerCon 2021


 

>>Mhm. Yes. Hello. Welcome back to theCUBE's coverage of DockerCon 2021 virtual. I'm John Furrier, your host of theCUBE. We're here with Massimo Re Ferre, principal technologist at AWS, Amazon Web Services. Massimo, thank you for coming on theCUBE, appreciate it. >>Thank you. Thank you for having me. >>Great to see you. Love this Amazon integration with Docker, want to get into that in a second. Been great to see the Amazon cloud native integration working well. ECS very popular. Every interview I've done at re:Invent, every year it gets better and better, more adoption every year. Tell us what's going on with Amazon ECS, because you have ECS Anywhere and now that's been made available. >>Yeah, that's correct, John. So customers have been appreciating the value and the simplicity of ECS for many years now. I mean, we launched ECS back in 2014, we have seen great adoption of the product, and customers have always appreciated the fact that it was easy to operate and easy to use. This is a journey with ECS Anywhere that started a few years ago, actually. We started this journey listening to customers that had particular requirements. I'd like to talk about, you know, the law of the land and the law of physics, where customers wanted to go all in into the cloud, but they did have this exception, that they needed to deal with applications that could not move to the cloud. So as I said, this journey started three years ago when we launched Outposts. Outposts is our managed infrastructure that customers can deploy in their own data centers, and we supported ECS on day one on Outposts. Having said that, there are lots of customers that came to us and said, we love Outposts, but there are certain applications and certain requirements, such as compliance, or simply the fact that we have assets that we need to reuse in our data center that we want to use before we move into the cloud. 
So they were asking us, we love the simplicity of ECS, but we have to use gear that we have in our data center. That is when we started thinking about ECS Anywhere. So basically the idea of ECS Anywhere is that you can use ECS, and appreciate the simplicity of using ECS, but using your customer-managed infrastructure as the data plane. Basically, what you can do is define your application within the ECS control plane and deploy those applications on customer-owned infrastructure. What that means from a very practical perspective is that you can deploy these applications on your managed infrastructure, ranging from Raspberry Pis, which is the demo that we showed at re:Invent when we announced ECS Anywhere, all the way up to bare metal servers. We don't really care about the infrastructure underneath; as long as the OS is supported, we're fine with that. >>Okay, so let's take this to the next level. Actually, the big theme at DockerCon is developer experience; you know, that's kind of what I want to talk about, and obviously developer productivity and innovation have to go hand in hand. You don't want to stunt the innovation equation, which is cloud native and scale, right? So how does the developer experience improve with Amazon ECS and ECS Anywhere, now that I'm on premises or in the cloud? Can you take me through it? What are the improvements around ECS and the developer? >>Yeah, I would argue that what ECS Anywhere solves is more the operational aspect, and requirements that are more akin to what the operations team needs to meet. We're working very hard to improve the developer experience on top of ECS, beyond what we're doing with ECS Anywhere. So I'd like to step back a little bit and tell a little bit of a story of why we're working on those things.
So the customers, as I said before, continue to appreciate the simplicity and the ease of use of ECS. However, what we learned over the years is that as we added more features to ECS, we ended up leveraging more AWS services. An example would be the load balancer integration, or Secrets Manager, or EFS, or other things like service discovery, which uses other AWS products underneath, like Cloud Map or Route 53. And what happened is that the end user experience, the developer experience, became a little bit more complicated, because now customers appreciated the ease of use of these fully managed services, however they were responsible for tying and wiring them all together in the application definition. So what we're working on to simplify this experience is tools that kind of abstract this verbosity that you get with ECS. An example is the CloudFormation template that a developer would need to use to deploy an application leveraging all of these features; it could end up being many hundreds of CloudFormation lines in the definition of the service. So we're working on new tools and new capabilities to make this experience better. Some of them are the CDK and the AWS Copilot CLI; those are all instruments and technologies and tools that we're building to abstract that verbosity that I was alluding to, and this is where the Docker Compose integration with ECS falls in, too. >>Yeah, I was just going to ask you about the Docker piece, because actually it's DockerCon; all the developers love containers, they love what they do. This is a native, you know, mindset of shifting left with security. How is the relationship with the Docker container ecosystem going with you guys? Can you explain, for the folks here watching this event and participating in the community, the relationship with Docker containers specifically? >>Yeah, absolutely. 
So basically, we started working with Docker many, many years ago; ECS was based on Docker technology when we launched it, and it's still using Docker technology. And last year we started to collaborate with Docker more closely, when Docker released the Docker Compose specification as an open source project. So basically, Docker is trying to use the Docker Compose specification to create an infrastructure-agnostic way to deploy Docker applications, using those specifications, on multiple infrastructures. As part of this journey, we worked with Docker to support ECS as a backend for the specification. Basically, what this means from a very practical perspective is that you can take an existing Docker Compose file, and Docker says that there are 650,000 Docker Compose files spread across GitHub and the other source control systems over the world, and basically you can take those Docker Compose files, "compose up," and deploy transparently into an ECS target on AWS. So if we go back to what I was alluding to before, the fact that the developer would need to author many hundreds of lines of CloudFormation template to be able to take their application and deploy it into the cloud, what they need to do now is author a file with a very clear and easy to use Docker Compose syntax, compose up, and deploy automatically on AWS, using ECS, Fargate, and many other AWS services on the backend. >>And what's the expectation in your mind as you guys look at the container service, the Anywhere model, the on-premise and the Outposts? What's the vision? Because that's, again, another question mark for me. It's like, okay, I get it, totally makes sense. But containers are now showing up in mainstream enterprises, not just the hyperscalers. You guys have always been kind of the forward thinkers, but, you know, main street enterprise, I call it. They're picking up adoption of containers in a massive way. 
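To make the "compose up" workflow Massimo describes concrete, a plain Compose file like the one below is all the developer authors; the service name and image here are illustrative, and the commands in the comments reflect Docker's ECS integration as I understand it, not anything quoted in the interview:

```yaml
# docker-compose.yml -- an illustrative sketch, not from the interview.
# With Docker's ECS integration installed and AWS credentials configured:
#   docker context create ecs myecs      # create an ECS-backed context
#   docker --context myecs compose up    # deploy this same file to ECS/Fargate
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
```

The same file works unchanged with plain `docker compose up` locally, which is the point of the specification being infrastructure-agnostic.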
They're looking at cloud native specifically as the place for modern application development, period. That's happening. What's the story? Say it again, because I want to make sure I get this right: ECS Anywhere, if I want to go on premises, hybrid, what's it mean for me? >>This goes back to what I was saying at the beginning. So there are, in what we have been discussing here, mostly two orthogonal things. The first is the fact that we enable these big enterprises to meet their requirements, and sometimes their checkboxes, to be able to deploy outside of AWS when there is a need to do that. This could be for edge use cases, or for using gear that exists in the data center. So this is what ECS Anywhere is trying to address. There is another, orthogonal discussion, which is developer experience, and that developer experience is being addressed by these additional tools. What I like to say is that CloudFormation is becoming a little bit like assembler, in a sense, right? It's becoming very low level, super powerful, but very low level, and we want to abstract and bring the experience to the next level and make it simple for developers to leverage the simplicity of some of these tools, including Docker Compose, and be able to deploy into the cloud and get all the benefits of the cloud: scalability, elasticity, and security. >>I love the assembler analogy, because you think about it, a lot of the innovation has been kind of low level, foundational, and if you start to see all the open source activity and the customers, the tooling does matter. And I think that's where the ease of use comes in. So the simplicity totally makes sense. Can you give an example of some simplicity piece? Because I think, you know, you guys are looking at ECS as the cornerstone for simplicity. I get that. 
Can you give an example to walk us through a day in the life, an example? >>An example of simplicity? Yeah, simplicity in action. Well, one of the examples that I usually give: there is this notion of being serverless, and I think that there is a little bit of an obsession around serverless, and trying to talk about serverless for so many things. When I talk about ECS, I like to use another moniker, and that is "versionless." So to me, simplicity also means that I do not have to update my service, right? The way ECS works is that engineering in the service team keeps producing and delivering new features for ECS overnight, for customers to wake up in the morning and consume those features without having to deal with upgrades and updates. I think that this is a very key example of simplicity when it comes to ECS, which is very hard to find in other solutions, whether they are on prem or in the cloud. >>That's a great example. One of the big complaints I hear, just anecdotally around the industry, is, you know, the lines of business want the apps to move faster, and the iteration with some craft, obviously with security and making sure things are buttoned up, but things get pulled back. It's almost slowed down, because the speed of the innovation is happening faster than the compliance of some sort of old governance model, or code reviews: I want to approve everything. So there's a balance between making sure what's approved, whether security or some pipeline procedures, and what not. 
So when we when we deliver services based on, for example, open source software uh, that customers need to um, look after in terms of upgrade to latest release. What we usually see is start up asking us can you move faster? There is a new version of that software, can you enable us to deploy that version? And then on the other hand of the spectrum, there are these big enterprises trying to move faster but not so much that are asking us can use lower. Can you slow down a little bit? Right, because I cannot keep that pigs. So it's a very it's a very interesting um, um, a very interesting time to be alive. >>You know, one of the, one of the things that pop up into these conversations when you talk, when I talk to VP of engineering of companies and then enterprises that the operational efficiency, you got developer productivity and you've got innovation right, you've got the three kind of things going on there knobs and they all have to turn up. People want more efficiency of the operations, they want more developed productivity and more innovation. What's interesting is you start seeing, okay, it's not that easy. There's also a team formation and I know Andy Jassy kinda referred to this in his keynote at Reinvent last year around thinking differently around your organizational but you know, that could be applied to technologists too. So I'd love to get your thoughts while you're here. I know you blog about this and you tweet about this but this is kind of like okay if these things are all going to be knobs, we turned up innovation efficiency, operationally and develop productivity. What's the makeup of the team? Because some are saying, you have an SRE embedded, you've got the platform engineering, you've got version lists, you got survival is all these things are going on all goodness. But does that mean that the teams have to change? What's your thoughts on that you want to get your perspective? >>Yeah, no, absolutely. 
I think there was a joke going around that as soon as you see a job title like VP of DevOps, I mean, that is not going to work, right? Because these things need to be embedded into each team, right? There shouldn't be a DevOps team or anything; it should just be a way of working. And I totally agree with you that these knobs need to go in sync, right? You cannot just push too hard on innovation while not having other folks able to, you know, keep that pace with you. And we're trying to help customers with multiple tools and services, to help not only developers, making the developer experience better, but also the people that are building these underlying platforms. For example, Proton; AWS Proton is a good example of this, where we're focusing on helping these teams that are trying to build platforms, because they are not measured on being agile or very fast; they're measured on being secure, being compliant, and being, you know, within the guardrails that a regulated enterprise needs to have. So we need to have all of these people, both organizationally as well as with the tools and technologies that help them in their specific areas, succeed. >>Yeah. And what's interesting about all this is that, you know, I think we're also having conversations, and again, you're starting to see things more clearly here at DockerCon. We saw some things at KubeCon, where the joke, well, not a joke, but the observation, was that it's less about Kubernetes, which is now becoming boringly reliable, and more about cloud native applications under the covers with programmability. So as all this is going on, there truly is a flip of the script. You can actually re-engineer and re-factor everything, not just re-platform your applications in IT, all at once. Right now there's a window, whether it's security or whatever. 
Now, with the containers and the Docker ecosystem, the container ecosystem, and Kubernetes, you've got EKS and you've got ECS and Fargate and all the stuff of goodness, companies can actually do this right now. They can actually change everything. This is a unique time. This window might close, or certainly change, if you're not on it now. It's the same argument of the folks who got caught by the pandemic and weren't in the cloud, got flat footed. So you're seeing that example: if you weren't in the cloud before the pandemic, you were probably losing during the pandemic; the ones that won were the ones already in the cloud. Now the same thing is true with cloud native. If you're not getting into it now, you're probably going to be on the wrong side of history. What's your reaction to that? >>Yeah, no, I agree totally. I like to think about this, I usually talk about this, if I can step back a little bit. I think that in this industry, and I have gray hairs and I have seen lots of things, there have been two big democratization events in IT that occurred in the last 30 years. The first one was when PC technology was introduced: distributed computing, away from the mainframe era. That was the first democratization step, right? So everyone had access to computers, so they could do things. If you fast forward to these days, what happened is that on top of that computer, whatever it became, a server or whatever, there is a very complex stack of technologies that allows you to develop and deploy your application, right? But that stack of technology, and the complexity of that stack of technology, is daunting in some way, right? So it inhibits access, democratic access, to technology. So to me, this is what cloud enabled, right? 
So the next step of democratization was the introduction of services that allow you to bypass that stack, which we call undifferentiated heavy lifting, because, you know, you don't get paid for managing, I don't know, an EMR server or whatever; you get paid for extracting value through application logic from that big stack. So I totally agree with you that we're in a unique position to enable everyone, with what we're building, to innovate a lot faster and in a more secure way. >>Yeah. And I totally agree, and I think that's a great historical view. Let's bring this down to the present today, and then bring this as the bridge to the future. If you're a developer, you can, and by the way, no matter whether you're programming infrastructure or just writing software, or even just calling APIs and rolling your own, composing your services, it's programmable and it's just all accessible. So I think that's going to change, again, back to the three knobs: developer productivity, or just people productivity; operational efficiency, which is scale; and then innovation, which is the business logic, where I think machine learning starts to come in, right? So if you can get the container thing going, you start tapping into that control plane. It's not so much just the data control plane; it's like a software control plane. >>Yeah, no, absolutely. The fact that you can, I mean, as I said, I have gray hair, so I've seen a lot of things, and back in the days, I mean, the whole notion of being able to call an API and get 10 servers, for example, or today, 10 containers, would have been, like, you know, almost a joke, right? We spent a lot of time racking and doing so much manual stuff that was so error prone, because we usually talk about velocity and agility, but we rarely talk about, you know, the difficulties and the problems that doing things manually introduced in the process, the ways that you can get it wrong. 
You know, it reminds me of this industry, and I'm like, finally, get off my lawn. In the old days, I walked to school with no shoes on in the snow. We had to build our own kernel and our own graphics libraries, and now they have all these tools. It's like, you're just an old, you know, coder. But joking aside, that experience you're bringing up is a point for the younger generation who have never loaded a Linux operating system before or done anything at that level. It's not so much old versus young, it's more about systems thinking. You said distributed computing. If you look at all the action, it's essentially distributed computing with a new software paradigm, and it's a systems architecture. It's not so much software engineering, software developer, you know, this or that; it's just basically all engineering at this point, all software. >>It is, it is very much indeed. It's all software; there is no other way to call it. I mean, we go back to talking about, you know, infrastructure as code, and everything is now coded software, in a way. >>This is great to have you on. Congratulations on ECS Anywhere being available, it's great stuff. And great to see you, and great to have this conversation. Amazon Web Services, obviously, the world has gone supercloud. Now you have distributed computing with edge, IoT exploding beautifully, which means a lot of new opportunities. So thanks for coming on. >>Thank you very much for having me. It was a pleasure. >>Okay, theCUBE coverage of DockerCon 2021 virtual. This is theCUBE. I'm John Furrier, your host. Thanks for watching.

Published Date : May 28 2021


Parul Singh, Luke Hinds & Stephan Watt, Red Hat | Red Hat Summit 2021 Virtual Experience


 

>>mhm Yes. >>Welcome back to theCube coverage of Red Hat Summit 2021. I'm John Furrier, host of theCube. It's virtual this year as we start preparing to come out of Covid, and a lot of great conversations here happening around technology. This is the emerging technology with Red Hat segment. We've got three great guests: Steve Watt, manager, distinguished engineer at Red Hat; Parul Singh, senior software engineer at Red Hat; and Luke Hinds, who's a senior software engineer as well. We've got the engineering team. Steve, you're the team leader, emerging tech within Red Hat. Always something to talk about. You guys have great tech chops, that's well known in the industry, and obviously now part of IBM, you've got a deep bench. What's your, how do you view emerging tech, how do you apply it, how do you prioritize? Give us a quick overview of the emerging tech scene at Red Hat. >>Yeah, sure. It's quite a conflated term. The way we define emerging technologies is that it's a technology that's typically 18 months plus out from commercialization, and this can sometimes go six months either way. Another thing about it is it's typically not something on any of our product roadmaps within the portfolio. So in some sense, it's often a bit of a surprise that we have to react to. >>So no real agenda. And I mean, you have some business unit obligations probably, but you have to have first principles within Red Hat. But for this, you're looking at kind of the moonshots, so to speak, the big game-changing shifts: quantum, you know, you've got now supply chain, everything from new economics to new technology. How do you get that right? >>Yeah, I think we definitely use a couple of different techniques to prioritize and filter what we're doing. And the first is, something will pop up and it will be like, is it in our addressable market?
So our addressable market is that we're a platform software company that builds enterprise software, and so, you know, it's got to sort of fit into that. A great example: if somebody came to us with an idea for, like, a drone command center, which is a military application, it is an emerging technology, but it's something that we would pass on. >>Yeah, that makes sense. But also, what's interesting is that you guys have an open source DNA. So you also have a huge commercial impact, and again, open source is on one of its 4th, 5th generations of awesomeness. So, you know, the good news is open source is well proven. But as you start getting into this more disruption, you've got the confluence of, you know, core cloud, cloud native, industrial and IoT edge, and data. All this is interesting, right? This is where the action is. How do you guys bring in that open source community participation? You've got more stakeholders emerging there. Break down how you guys manage all that complexity. >>Yeah, sure. So I think the way I would start is that, you know, we like to act on good ideas, but I don't think good ideas come from any one place. And so we typically organize our teams around sort of horizontal technology sectors. So you've got, you know, Luke, who's heading up security, but I have an edge team, a cloud networking team, a cloud storage team, a cloud application platforms team. So we've got these different areas where we sort of attack work and opportunities, but, you know, the good ideas can come from a variety of different places. So we try and leverage co-creation with our customers and our partners. A good example of something we had to react to a few years ago was Knative, right? So a new way of doing serverless and eventing on top of Kubernetes, and that originated from Google.
Whereas if you look at quantum, right, IBM's the actual driver on quantum science, and that originated from IBM, where Parul will talk about exactly how we chose to respond to that. Some things originated organically within the team, so Luke talking about Sigstore is a great example of that. But we do use the addressable market as a way to sort of focus what we're doing, and then we try and land it within our different emerging technologies teams to go tackle it. Now, you asked about open source communities, which are quite interesting. So typically when you look at an open source project, it's there to tackle a particular problem or opportunity. Sometimes what you actually need commercial vendors to do is, when there's a problem or opportunity that's not tackled by any one open source project, we have to put them together to create a solution to go tackle that thing. That's also what we do. And so we sort of create this bridge between Red Hat and our customers and multiple different open source projects. And this is something we have to do because sometimes just that one open source project doesn't really care that much about that particular problem; they're motivated elsewhere. And so we sort of create that bridge. >>We've got two great cohorts here and colleagues: Parul on the quantum side, and you've got Luke on the security side. Parul, I'll start with you. Quantum, you also mentioned IBM, great leadership there. Quantum on OpenShift. I mean, come on, that's not coming together for me in my mind, it's not the first thing I think of. But it really, that sounds compelling. Take us through, you know, how this changes the computing landscape, because heterogeneous systems is what we want, and that's the world we live in. But now with distributed systems and all kinds of new computing modules out there, how does this make sense? Take us through this.
>>Um yeah, John, but before that I want to explain something which is called quantum supremacy, because it plays a very important role in the roadmap we've been working on. So quantum computers are evolving, and they have been around, but right now you see that they are going to be the next thing. And we define quantum supremacy as: let's say you have any program that you run, or any problem that you solve, on a classical computer; a quantum computer would be giving you the results faster. So that is how we define quantum supremacy, when the same workloads are doing better on a quantum computer than they do on a classical computer. So the whole drive is, all the applications, all the companies, are trying to find avenues where quantum supremacy is going to change how they solve problems or how they run their applications. And even though quantum computers are there, they are not as easily accessible for everyone to consume, because it's a very new area that's being formed. So what we were thinking is, how can we provide a mechanism to connect these two worlds: you have a classical world, you have a quantum world. And that's where a lot of the thought process has been. And we said, okay, with OpenShift we have the best of the classical components. You can take OpenShift, you can develop, deploy, and run your application in a containerized platform. What if you provide a mechanism so that the workloads that are running on OpenShift are also consuming quantum resources, or are able to run computations on quantum computers, take the results, and integrate them into their normal classical workloads? So that was the whole inception, and that's what brought us here. So we took an operator-based approach, and what we are trying to do is establish the best practices so that you can have these heterogeneous applications with classical components.
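The operator-based approach Parul describes pairs classical components with quantum circuits. As a rough illustration of the quantum half, here is a minimal sketch of the kind of two-qubit circuit (a Bell pair) such a hybrid workload might submit to a quantum backend. This is a toy statevector simulation in plain Python for illustration only; it is not Qiskit and not the operator's actual API.

```python
import math

# Toy 2-qubit statevector simulation of a Bell-pair circuit, the kind of
# small circuit a hybrid classical/quantum workload might submit to a
# quantum backend. Amplitudes are ordered [|00>, |01>, |10>, |11>].

def apply_h_q0(state):
    """Hadamard on qubit 0: mixes the amplitude pairs that differ in qubit 0."""
    s = 1 / math.sqrt(2)
    a00, a01, a10, a11 = state
    return [s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

def probabilities(state):
    return [abs(a) ** 2 for a in state]

state = [1.0, 0.0, 0.0, 0.0]           # start in |00>
state = apply_cnot(apply_h_q0(state))  # H then CNOT builds a Bell pair
probs = probabilities(state)           # ~50/50 between |00> and |11>
```

In the workflow described here, the classical side of the workload would build such a circuit with an SDK, send it to a quantum backend for execution, and fold the measured results back into its ordinary containerized processing.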
Those classical components talk to, interact with, and exchange data with the quantum components. >>So I've got to ask, with the rise of containers now, Kubernetes at the center of the cloud native value proposition, what workloads do you see benefiting from the quantum systems the most? Do you guys have any visibility on some of those workloads? >>So again, it's very new, it's really very early in time, and we talk with our customers, and every customer is trying to identify first where quantum supremacy is going to be playing a role for them. What we are trying to do is, when they reach there, we should have a solution that they can use with the existing infra that they have on OpenShift, and use it to consume the quantum computers that may or may not be inside their own cloud. >>Well, I want to come back and ask you some of the impact on the landscape. I want to get to Luke real quick, because, you know, I think security, quantum breaks security, potentially, some people have been saying. But you guys are also looking at a bunch of projects around supply chain, which is a huge issue when it comes to the landscape, whether it's components on a machine in space to actually handling, you know, data on a corporate database. You guys have Sigstore. What's this about? >>Sure, yes. So Sigstore, a good way to frame Sigstore is to think of Let's Encrypt, and what Let's Encrypt did for website encryption is what we plan to do for software signing and transparency. So Sigstore itself is an umbrella organization that contains various different open source projects that are developed by the Sigstore community. Now, Sigstore will be brought forth as a public good nonprofit service.
So again, we're very much basing this on the successful model of Let's Encrypt. Sigstore will enable developers to sign software artifacts, bills of materials, containers, binaries, all of these different artifacts that are part of the software supply chain. These can be signed with Sigstore, and then these signing events are recorded into a technology that we call a transparency log, which means that anybody can monitor signing events, and a transparency log has this nature of being read-only and immutable. It's very similar to a blockchain; it allows you to have cryptographic proof auditing of our software supply chain. And we've made Sigstore so that it's easy to adopt, because traditional cryptographic signing tools are a challenge for a lot of developers to implement in their open source projects. They have to think about how to store the private keys. Do they need specialist hardware? If they were to lose a key, then cleaning up afterwards, the blast radius, the key compromise, can be incredibly difficult. So Sigstore's role and purpose, essentially, is to make signing easy, easy to adopt by projects, and then they have the protections around there being a public transparency log that can be monitored. >>See, this is all about open. Being more open makes it more secure. Is that the thesis? >>Very much yes. It's that security principle of the more eyes on the code, the better. >>So let me just back up, is this open, you said it's going to be a nonprofit? >>That's correct, yes. All of the code is developed by the community, it's all open source, anybody can look at this code. And then we plan, alongside the Linux Foundation, to launch a public good service. So this will make it available for anybody to use, a nonprofit, free-to-use service. >>So Luke, and maybe Steve, if you can weigh in on this. I mean, this goes back. If you look back at some of the early cloud days, people were really trashing cloud as there's no security.
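To ground the flow Luke describes (hash an artifact, sign the digest, record the signing event), here is a minimal illustrative sketch. Real Sigstore signing uses public-key cryptography with short-lived certificates; since Python's standard library has no public-key signing, an HMAC key stands in for the keypair here, and the key name and payload are made up for the example.

```python
import hashlib
import hmac

# Illustrative sign-and-record flow: hash an artifact, sign the digest,
# and emit the record a transparency log could store. The HMAC key is a
# stand-in for a real signing key; this is NOT how Sigstore actually signs.

def artifact_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sign(key: bytes, digest: str) -> str:
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, digest: str, signature: str) -> bool:
    return hmac.compare_digest(sign(key, digest), signature)

key = b"hypothetical-signing-key"
digest = artifact_digest(b"container image layer bytes")
log_entry = {"digest": digest, "signature": sign(key, digest)}
```

The point of the public service described in this conversation is that a record like `log_entry` would then be appended to an append-only log that anyone can monitor for unexpected signing events.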
And cloud turns out to be more secure now, with cloud, given the complexity and scale of it. Does the same apply here? Because I feel this is a similar kind of concept, where it's open, but yet the more open it is, the more secure it is. And then it might be a better fit for an IT security solution, because right now everyone is scrambling on the IT side, whether it's zero trust or endpoint protection, everyone's kind of trying everything in sight. This is kind of changing the paradigm a little bit on software security. Could you comment on how you see this playing out in traditional enterprises? Because if this plays out like the cloud, open wins. >>So Luke, why don't you take that, and then I'll follow up with another lens on it, which is the operate-first piece. >>Sure, yes. So I think in a lot of ways this technology has to be open, because this way we have transparency. The code can be audited openly, our operational procedures can be audited openly, and the community can help to develop not only our code but our operational mechanisms. So we look to use technology such as Kubernetes, OpenShift operators, and so forth. Sigstore itself runs completely in a cloud, it is cloud native, so it's very much in the paradigm of cloud. And yeah, essentially security always operates better when it's open; I've found that from looking at all aspects of security over the years that I've worked in this realm. >>Okay, so just to add to that some other context around Sigstore that's interesting, which is, you know, the software supply chain. Sigstore is a solution to help build more secure software supply chains. And so there's a growing community around that, and there's an ecosystem of sort of cloud native, Kubernetes-centric approaches for building more secure software. I think we all caught the SolarWinds attack.
The enterprise software industry is sort of responding as a whole to go and close out as many of those gaps as possible, reduce the attack surface. So that's one aspect of why Sigstore is so interesting. Another thing is how we're going about it. So we talked about, you mentioned, some of the things that people like about open source. One is transparency, so sunlight is the best disinfectant, right? Everybody can see the code, we can kind of make it more secure. And then the other is agency, where basically if you're waiting on a vendor to go do something, if it's proprietary software, you really don't have much agency to get that vendor to go do that thing. Whereas with open source, if you're tired of waiting around, you can just submit the patch. So what we've seen with packaged software is, with open source we've had all this transparency and agency, but we've lost it with software as a service, right? Where vendors or cloud service providers are taking packaged software and making it available as a service, but the operationalizing of that software is proprietary and doesn't get contributed back. And so what Luke's building here, along with our partner Dan Lorenc from Google, a very active contributor in it, is that the operational piece to actually run Sigstore as a public service is part of the open source project. So people can then go and take Sigstore and maybe run it as a smaller internal service. Maybe they discover a bug; they can fix that bug and contribute it back to the operationalizing piece as well as the traditional packaged software, to basically make it a much more robust and open service. So you bring that transparency and the agency back to the SaaS model as well. >>Luke, if you don't mind, before we end this segment, or this portion of it: the importance of immutability is huge in the world of data. Can you share more on that?
Because you're seeing that as a key part of the blockchain, for instance, having this immutability. Because, you know, people worry about how things progress in this distributed world, whether from a hacking standpoint or tracking changes. Immutability becomes super important, and how is it going to be preserved in this new Sigstore? >>Oh yeah, so immutability essentially means it cannot be changed. So the structure of something is set; if it is in any way tampered with or changed, then it breaks the cryptographic structure that we have of our public transparency service. So this way anybody can effectively recreate the cryptographic structure that we have of this public transparency service. So this immutability provides trust that there is non-repudiation of the data that you're getting. This data is data that you can trust because it's built upon a cryptographic foundation. So it has very similar parallels to blockchain; you can trust a blockchain because of the immutable nature of it, and there is some consensus as well. Anybody can effectively download the blockchain and run it themselves and compute that the integrity of that system can be trusted, because of this immutable nature. So that's why we made this an inherent part of Sigstore, so that anybody can publicly audit these events and data sets to establish that they're tamper-free. >>That is a huge point. I think one of the things, beyond just the security aspect of being hacked and protecting assets: trust is a huge part of our society now, not just on data but everything, anything that's reputable, whether it's videos like this being deepfaked, or news, or any information. All this ties to security again, fundamentally, and amazing concepts. I really want to keep an eye on this great work. Parul, I've got to get back to you on quantum, because again, I mean, people love quantum.
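The tamper-evidence Luke describes can be sketched as a hash chain, in which each entry commits to everything before it, so changing any earlier record invalidates every hash after it. Real transparency logs use Merkle trees, which additionally support efficient inclusion proofs; this linear chain is a simplified illustration, and the payload fields are made up.

```python
import hashlib
import json

# Append-only, tamper-evident log sketch: each entry's hash covers the
# previous entry's hash plus its own payload, so retroactive edits break
# the chain and are caught by a full audit.

def entry_hash(prev_hash: str, payload: dict) -> str:
    material = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(material.encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def audit(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"artifact": "sha256:aaa", "signer": "dev@example.com"})
append(log, {"artifact": "sha256:bbb", "signer": "ci@example.com"})
ok_before = audit(log)                                # chain verifies
log[0]["payload"]["signer"] = "attacker@example.com"  # tamper with history
ok_after = audit(log)                                 # audit now fails
```

This is why anyone monitoring the log can detect retroactive edits: the cryptographic structure, not an operator's promise, carries the trust.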
It just feels so sci-fi, and it's like almost right here, so close, and it's happening. And then people always ask, what does that mean for security? We'll go back to Luke and ask him about quantum and crypto. But before we get into that, I'm curious about how that's going to play out from the project side, because is it going to be more part of, like, a CNCF? How do you bring the open source vibe to quantum? >>So that's a very good question, because that was the plan. The whole work that we are going to do related to operators to enable quantum is managed by the open source community, and that project lies in Qiskit. So Qiskit has its own open source community, and all the modifications, by the way, I should first tell you what Qiskit is: Qiskit is the SDK that you use to develop circuits that are run on IBM or Honeywell backends. So there are certain quantum computer backends that support circuits that are created using Qiskit, which is open source as well. So there is already a community around this, which is the Qiskit open source community, and we have pushed the code, and all the maintenance is taken care of by that community. To answer your question about whether we are going to integrate it with CNCF: that is not in the picture right now. It has a place in its own community, and it is also very niche to people who are working on quantum. So right now you have, like, the contributors who are from IBM as well as other communities that are specifically working on quantum. So right now I don't think we have a roadmap to integrate with CNCF, but open source is the way to go, and we are on that trajectory. >>You know, we joke here at theCube that a qubit is coming around the corner, we can't help it, but we spell that with a C. But look, I want to ask you one of the things while you're here, you're a security guru.
I wanted to ask you about quantum, because a lot of people are scared that quantum is going to crack all the keys on encryption, with its power, and more hacking. Can you just comment on that? What's your reaction to that? >>Yes, that's an incredibly good question. This will occur, okay? And I think it's really about preparation more than anything now. There's a principle that we have within the security world when it comes to the coding and designing of software, for this aspect of future cryptography being broken, as we've seen with the likes of MD5 and SHA-1 and so forth. We call this algorithm agility. So this means that when you write your code and you design your systems, you make them conducive to being able to easily swap and pivot the algorithms that you use. So the encryption algorithms that you have within your code, you do not become too fixed to those, so that if, as computing gets more powerful, the current sets of algorithms are shown to have inherent security weaknesses, you can easily migrate and pivot to stronger algorithms. So the imperative is that when you build code, you practice this principle of algorithm agility, so that when SHA-256 or SHA-512 becomes the SHA-1, you can swap out your systems, you can change the code in a least disruptive way, to allow you to address that flaw within your code, in your software projects.
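Luke's algorithm agility principle can be sketched in a few lines: tag every digest with the algorithm that produced it, so a migration from SHA-256 to SHA-512 (or whatever succeeds them) becomes a configuration change rather than a rewrite. This is an illustrative sketch, not code from any project discussed here.

```python
import hashlib

# Algorithm agility sketch: digests carry the name of the algorithm that
# produced them, so verification works across a migration and callers
# never hard-code a hash function.

DEFAULT_ALGORITHM = "sha256"   # the single knob to turn when pivoting

def tagged_digest(data: bytes, algorithm: str = DEFAULT_ALGORITHM) -> str:
    h = hashlib.new(algorithm)  # any algorithm hashlib supports
    h.update(data)
    return f"{algorithm}:{h.hexdigest()}"

def check(data: bytes, digest: str) -> bool:
    algorithm, _, _ = digest.partition(":")
    return tagged_digest(data, algorithm) == digest

old = tagged_digest(b"payload", "sha256")
new = tagged_digest(b"payload", "sha512")  # pivot without touching callers
```

Stored digests remain verifiable during the transition because each one names its own algorithm, which is exactly the "swap out your systems in the least disruptive way" property being described.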
There's another analogy from the security world, they call it turtles all the way down, which is effectively you always have to get to the point that a human or a computer establishes that first point of trust to sign something off. And so so it is it's a it's a world that is ever increasing in complexity. So the best that you can do is to be prepared to be as open as you can to make that pivot as and when you need to. >>Pretty impressive, great insight steve. We can talk for hours on this panel, emerging tech with red hat. Just give us a quick summary of what's going on. Obviously you've got a serious brain trust going on over there. Real world impact. You talk about the future of trust, future of software, future of computing, all kind of going on real time right now. This is not so much R and D as it is the front range of tech. Give us a quick overview of >>Yeah, sure, yeah, sure. The first thing I would tell everyone is go check out next that red hat dot com, that's got all of our different projects, who to contact if you're interested in learning more about different areas that we're working on. And it also lists out the different areas that we're working on, but just as an overview. So we're working on software defined storage, cloud storage. Sage. Well, the creator of Cf is the person that leads that group. We've got a team focused on edge computing. They're doing some really cool projects around um very lightweight operating systems that and kubernetes, you know, open shift based deployments that can run on, you know, devices that you screw into the sheet rock, you know, for that's that's really interesting. Um We have a cloud networking team that's looking at over yin and just intersection of E B P F and networking and kubernetes. Um and then uh you know, we've got an application platforms team that's looking at Quantum, but also sort of how to advance kubernetes itself. 
So that's that's the team where you got the persistent volume framework from in kubernetes and that added block storage and object storage to kubernetes. So there's a lot of really exciting things going on. Our charter is to inform red hats long term technology strategy. We work the way my personal philosophy about how we do that is that Red hat has product engineering focuses on their product roadmap, which is by nature, you know, the 6 to 9 months. And then the longer term strategy is set by both of us. And it's just that they're not focused on it. We're focused on it and we spend a lot of time doing disambiguate nation of the future and that's kind of what we do. We love doing it. I get to work with all these really super smart people. It's a fun job. >>Well, great insights is super exciting, emerging tack within red hat. I'll see the industry. You guys are agile, your open source and now more than ever open sources, uh, product Ization of open source is happening at such an accelerated rate steve. Thanks for coming on parole. Thanks for coming on luke. Great insight all around. Thanks for sharing. Uh, the content here. Thank you. >>Our pleasure. >>Thank you. >>Okay. We were more, more redhead coverage after this. This video. Obviously, emerging tech is huge. Watch some of the game changing action here at Redhead Summit. I'm john ferrier. Thanks for watching. Yeah.

Published Date : Apr 28 2021

SUMMARY :

This is the emerging technology with Red Hat segment. So in some sense, it's often a bit of a surprise that we have to react to. And I mean, you have some business unit obligations probably, but you have to have first principles. You know, it's got to sort of fit into that; a great example, if somebody came to us with an idea. So you also have a huge commercial impact, and again, open source is on one of its 4th, 5th generations. So I think the way I would start is that, you know... Parul on the quantum side, and you've got Luke on the security side. And we define quantum supremacy as, let's say you have... really very early in time, and we talk with our customers. And I want to get to Luke real quick, because you know... It's very similar to a blockchain; allows you to have cryptographic proof. The more eyes on the code, the better. All of the code is developed by the community. So Luke, maybe Steve, if you can weigh in on this. So Luke, why don't you take that? You know, I found that from looking at all aspects of security over the years that I've worked in this realm. So we talked about, you mentioned, some of the things that... Because you know, people worry about, you know, how things progress in this distributed world. Effectively recreate the cryptographic structure that we have of this public service. We go back to Luke and ask him, well, quantum, you know, crypto. But... So right now you have, like, the contributors who are from... we spell that with a C. But look, I want to ask you one of the things while you're here. So the encryption algorithms that you have within your code... So the ability to decide who signs off on these, this comes back... So the best that you can do is to be prepared, to be as open as you can. This is not so much R&D as it is... on their product roadmap, which is by nature, you know, the 6 to 9 months. Obviously the industry. Watch some of the game-changing action here at Red Hat Summit.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
john ferrier | PERSON | 0.99+
Stephan Watt | PERSON | 0.99+
luke Hines | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Luke Hinds | PERSON | 0.99+
steve | PERSON | 0.99+
six months | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
Parul Singh | PERSON | 0.99+
6 | QUANTITY | 0.99+
Honeywell | ORGANIZATION | 0.99+
18 months | QUANTITY | 0.99+
Lawrence | PERSON | 0.99+
Linux Foundation | ORGANIZATION | 0.99+
six stores | QUANTITY | 0.99+
Redhead | ORGANIZATION | 0.99+
4th | QUANTITY | 0.99+
Six door | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
first piece | QUANTITY | 0.99+
six Door | ORGANIZATION | 0.99+
six doors | QUANTITY | 0.99+
sixth | QUANTITY | 0.99+
red hat dot com | ORGANIZATION | 0.99+
Redhead Summit | EVENT | 0.99+
both | QUANTITY | 0.99+
google | ORGANIZATION | 0.98+
9 months | QUANTITY | 0.98+
One | QUANTITY | 0.98+
Lee | PERSON | 0.98+
first | QUANTITY | 0.98+
red hats | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
six door | ORGANIZATION | 0.98+
Red hat | ORGANIZATION | 0.96+
Lukes | PERSON | 0.96+
luke | PERSON | 0.96+
red hat | ORGANIZATION | 0.96+
first principles | QUANTITY | 0.95+
john | PERSON | 0.95+
first thing | QUANTITY | 0.95+
Six Law | TITLE | 0.95+
Pearl | PERSON | 0.94+
Red hat | ORGANIZATION | 0.92+
six doorway | QUANTITY | 0.92+
Sixth floor | QUANTITY | 0.92+
first point | QUANTITY | 0.91+
6th | QUANTITY | 0.91+
few years ago | DATE | 0.89+
Six | QUANTITY | 0.88+
5th generation | QUANTITY | 0.88+
steve watt | PERSON | 0.86+
cuba netease | ORGANIZATION | 0.85+
Cf | ORGANIZATION | 0.84+
three great guests | QUANTITY | 0.84+
Six store | ORGANIZATION | 0.82+
this year | DATE | 0.82+
ibms | ORGANIZATION | 0.82+
Red Hat Summit 2021 Virtual | EVENT | 0.82+
Cube | ORGANIZATION | 0.81+
Torri | PERSON | 0.8+
redhead | ORGANIZATION | 0.79+
Red Hat summit 21 | EVENT | 0.79+
Cubans | PERSON | 0.76+
Sage | PERSON | 0.76+
one place | QUANTITY | 0.72+
shot 5 12 | OTHER | 0.71+
Sha | PERSON | 0.69+
cohorts | QUANTITY | 0.66+
C. N. | TITLE | 0.65+
K Native | ORGANIZATION | 0.62+
zero Trust | QUANTITY | 0.61+
six law | QUANTITY | 0.6+
six store | ORGANIZATION | 0.57+

Sandeep Singh, HPE


 

(upbeat music) >> Hi everybody, this is Dave Vellante, and with me is Sandeep Singh, the vice president of Storage Marketing at Hewlett Packard Enterprise. We're going to riff on some of the trends in the industry, what we're seeing. And we've got a little treat for you. Sandeep, great to see you, man. >> Dave, it's a pleasure to be here. >> You and I have known each other for a long time. We've had some great discussions, some debates, some intriguing mind benders. What are you seeing out there in storage? So much has changed. What are the key trends you're seeing? Let's get into it. >> Yeah, across the board, as you said, so much has changed, when you reflect back at the underlying transformation that's taken place with data, cloud, and AI. First of all, our customers are seeing this massive data explosion that literally now spans edge to core to cloud. They're also seeing a diversity of application workloads across the board. And the emphasis this places is on the complexity that underlies overall infrastructure and data management. Across the board, we're hearing a lot from customers about just the underlying infrastructure complexity and the infrastructure sprawl. And then the second element of that is really extending into the complexity of data management. >> So it's interesting you're talking about data management. You remember, you and I, we were in Andover, it was probably like five years ago, and all we were talking about was media. Flash this and flash that, and at the time that was kind of the hot storage topic. Well, flash came in, addressing some of the issues that we historically talked about. Now the problem statement is really, kind of, quote unquote, metaphorically moving up the stack, if you will. You mentioned management, but let's dig into that a little bit. I mean, what is management? A lot of people, that means different things to different people. You talk to a database person or a backup person.
How do you look at management? What does that mean to you? >> Yeah, Dave, you mentioned that flash came in and it actually accelerated the overall speed and latency that storage was delivering to the application workloads. But fundamentally, when you look back at storage over a couple of decades, the underlying way of how you're managing storage hasn't fundamentally changed. There's still an incredible amount of complexity for IT. It's still a manual, admin-driven experience for customers. And what that's translating to is, more often than not, IT is in the world of firefighting, and it leaves them unable to help with the more strategic projects to innovate for the business. And basically IT has that pressure point of moving beyond that, helping bring the greater levels of agility that line of business owners are asking for, and being able to deliver on more of the strategic projects. So that's one element of it. The second element that we're hearing from customers about is, as more and more data just continues to explode from edge to core to cloud, and as the infrastructure has grown from just being on-prem to being at the edge to being in the cloud, that complexity is now expanding from just being on-prem to across multiple different clouds. So when you look across the data life cycle (how do you store it, how do you secure it, how do you protect it and archive it and analyze that data), that end-to-end life cycle management of data today resides on a fragmented set of overall infrastructure and tools and processes and administrative boundaries. That's creating a massive challenge for customers, and the impact of that ultimately comes at a cost to agility, to innovation, and ultimately business risk. >> Yeah, so we've seen, obviously, the cloud has addressed a lot of these problems, but the problem is the cloud is in the cloud, and much of my stuff, most of my stuff, isn't in the cloud.
So I have all these other workloads that are either on-prem, and now you've got this emerging edge. And so I wonder if we could just talk a little vision here for a minute. What I've been envisioning is this abstraction layer that cuts across it all. It doesn't really matter where it is: on-prem, across clouds, in the cloud, on the edge. We could talk about what that all means. But the customers that I talk to, they're sort of done with the complexity of that underlying infrastructure. They want technology to take care of that. They want automation, they want AI brought into that equation. And it seems like we're on the cusp of the decade where that might happen. What's your take? >> Well, yeah, certainly. I mentioned that data, cloud and AI are really the disruptive forces propelling the digital transformation for customers. Cloud has set the standard for agility, and AI-driven insights and intelligence are really helping to make the underlying infrastructure invisible. And customers are looking for this notion of being able to get that cloud operational agility pretty much everywhere, because they're discovering that that's a game changer. And yet a lot of their application workloads and data is on-prem and is increasingly growing at the edge. So they want the same experience, to be able to truly bring that agility to wherever their data lives. And that's one of the things that we're continuing to hear from customers. >> And this problem is just going to get worse. I mean, for decades we marched to the cadence of Moore's Law, and everybody forgets about Moore's Law and says, "Ah, it's dying," or whatever. But actually, when you look at the processing power that's coming out now, it's more than doubling every two years, quadrupling every two years. So now you've got this capability in your hands, and application designers, storage companies, networking companies.
They're going to have all this power to bring in AI and do things that we've never even imagined before. So it's not about the box and the speeds and feeds of the box. It's really more about this abstraction layer that I was talking about, the management, if you will, that you were discussing, and what we can do in terms of powering new workloads and machine intelligence. It's this kind of ubiquitous, call it the cloud, but it's expanding pretty much everywhere, in every part of our lives, even to the edge. You think about autonomous vehicles, you think about factories. It's actually quite mind boggling where we're headed. >> It is, and you touched upon AI. And certainly when you look at infrastructure, for example, there's been a ton of complexity in infrastructure management. One of the studies, done actually by IDC, indicated that over 90% of the challenges that arise down at the storage infrastructure layer that's powering the apps ultimately originate from way above in the stack, all the way from the server layer on down, or even the virtual machine layer. And there, for example, AIOps for infrastructure has become a game changer for customers, bringing the power of AI, machine learning and multivariate analysis to predict and prevent issues. Dave, you also touched upon the edge, and across the board what we're seeing is the enterprise edge becoming that frontier for customer experiences, and the opportunity to reimagine customer experiences, as well as the frontier for the commerce that's happening when you look at retail, manufacturing, or financial services. So across the board, with the data growth that's happening and this edge becoming the strategic frontier for delivering the customer experiences, how you power your application workloads there, and how you deliver that data and protect that data and seamlessly manage that overall infrastructure.
As you mentioned, abstracted away at a higher level, becomes incredibly important for customers. >> So interesting to hear how the conversation has changed. I'd like to say, I go back to, whatever it was, five years ago, when we were talking about flash, storage class memory, NVMe. Those things are still there, but your emphasis now, you're talking about machine learning, AI, the math around deep learning. It's really software that you're focusing on these days. >> Very much so. Certainly this notion of software and services that are delivering and unlocking a whole new experience for customers, that's really the game changer going forward, and that's what we're focused on. >> Well, I said we had a little surprise for you. So you guys are having an event on May 4th. It's called Unleash The Power of Data. What's that event all about, Sandeep? >> Yeah, we are very much excited about our May 4th event. As you mentioned, it's called Unleash The Power of Data. And as most organizations today are data driven, with data at the heart of what they're doing, we're excited to invite everyone to join this event. Through this event we're unveiling a new vision for data that accelerates the data-driven transformation from edge to cloud. This event promises to be a pivotal one, and one that IT admins, cloud architects, virtual machine admins, vice presidents, directors of IT and CIOs really won't want to miss. Across the board, this event brings a new way of articulating the overall problem statement and a market-focused articulation of the trends that we were just discussing. It's an event that's going to be hosted by business and technology journalist Shibani Joshi. It will feature a panel with a focus on the crucial role that data is playing in customers' digital transformation.
It will also feature Antonio Neri, CEO of HPE, and Tom Black, senior vice president and general manager of the HPE Storage business, along with industry experts including Julia Palmer, research vice president at Gartner. We will unveil game-changing HPE innovations that will make it possible for organizations across edge to cloud to unleash the power of data. >> Sounds like a great event. I presume I can go to hpe.com and get information. Is it a registered event? How does that all work? >> Yeah, we invite everyone to visit hpe.com, and by visiting there you can click and save the date of May 4th at 8:00 AM Pacific. We invite everyone to join us. We couldn't be more excited to get to this event and be able to share the vision and game-changing HPE innovations. >> Awesome. So I don't have to register, right? I don't have to give up my three children's names and my social security number to attend your event. Is that right? >> No registration required. Come by, click on hpe.com, save the date on your calendar. And we very much look forward to having everyone join us for this event. >> I love it, it's a pure content event. I'm not going to get a phone call afterwards saying, "Hey, buy some stuff from me." That could come through other channels, but so that's good. Thank you for that. Thanks for providing that service to the industry. I'm excited to see what you guys are going to be announcing that day. And look, Sandeep, like I said, we've known each other a while. We've seen a lot of trends, but the next 10 years, it ain't going to look like the last 10, is it? >> It's going to be very different, and we couldn't be more excited. >> Well, Sandeep, thanks so much for coming to theCube and riffing with me on the industry and giving us a preview of your event. Good luck with that. And always great to see you. >> Thanks a lot, Dave. Always great to see you as well. >> All right. And thank you everybody. This is Dave Volante for theCube, and we'll see you next time. (upbeat music)

Published Date : Apr 20 2021

Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true, and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying "by the strict definition." And he did that for a reason, because he's smart enough to know that the chip industry is masterful at workarounds. Here's proof that the death of Moore's Law by its strictest definition is largely irrelevant. My colleague David Floyer and I were hard at work this week, and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating, and quite dramatically.
This graphic digs into the progression of Apple's SoC, system on chip, developments from the A9, culminating with the A14, the five-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time, for three processor types: the CPU, which we measure here in terahertz, that's the blue line, which you can hardly even see; the GPU, which is the orange, measured in trillions of floating point operations per second; and then the NPU, the neural processing unit, measured in trillions of operations per second, which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC, because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. That equates to about 40% annually. CPU performance growth has now moderated; it is down to roughly 30% annual improvement. So technically speaking, Moore's Law as we knew it is dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's actually even higher than that, because for these three processor types we're not even counting the impact of the DSPs and accelerator components of Apple's system on a chip, which would push this even higher. Apple's A14, which is shown on the right-hand side here, is quite amazing. It's got a 64-bit architecture, it's got many, many cores, and it's got a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, the types of AI that we show here continue to evolve: facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired, and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack?
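The growth-rate arithmetic here is easy to check: a fixed doubling period of d months implies a compound annual growth rate of 2^(12/d) - 1. A quick sketch (the function names are ours, for illustration):

```python
from math import log

def annual_growth_from_doubling(months: float) -> float:
    """Compound annual growth rate implied by doubling every `months` months."""
    return 2 ** (12 / months) - 1

def doubling_months_from_annual(rate: float) -> float:
    """Inverse: the doubling period in months implied by an annual growth rate."""
    return 12 * log(2) / log(1 + rate)

# Moore's Law: doubling every 24 months is about 41% per year,
# the "about 40% annually" figure above.
print(f"{annual_growth_from_doubling(24):.0%}")           # 41%

# A combined ~118% annual pace implies doubling in under a year.
print(f"{doubling_months_from_annual(1.18):.1f} months")  # 10.7 months
```

In other words, the combined SoC pace described above doubles capability roughly every ten to eleven months, versus every two years under classic Moore's Law.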
Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that was quite telling. We just shared that processing power is exploding. The costs, consequently, are dropping like a rock. Apple's A14 costs the company approximately 50 bucks per chip. Arm, at its v9 announcement, said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption. They said this chip will cost a buck, a dollar, to shave 10% off your refrigerator's electricity bill. It's just astounding. But look at where the expensive bottlenecks are: it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now, with custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software and into hardware, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99%, is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models, and we're going to come back to that. But this picture says, if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding. Cisco is already designing its own chips. But Dell, HPE, which maybe used to do a lot of their own custom silicon, Pure Storage, NetApp, I mean, the list goes on and on: either you start designing custom silicon or you're going to get disrupted, in our view.
AWS, Google and Microsoft are all doing it for a reason, as is IBM, and as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bring to bear this processing power that we're talking about to create new capabilities like we've never seen before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly, in his book "Seeing Digital," author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence and says that there's nothing artificial about machine intelligence, just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting nonetheless; words matter. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter," to make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now, deep learning is a more advanced type of machine learning; it uses more complex math. But the point that we want to make here is that today much of the activity in AI is around building and training models, and this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model that we were just talking about: taking real-time data from sensors, processing that data locally, and then applying that training that has been developed in the cloud and making micro adjustments in real time. So let's take an example. Again, we love Tesla examples.
Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road condition, angles of the tires, tire wear, tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car, gets data from sensors, and makes these micro adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time, because there's so much of it. It just can't push it all back to the cloud. But it can, however, selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data, because they notice that there are a lot of accidents in New England in certain months. And maybe Tesla takes that snapshot and sends it back to the cloud and combines it with other data, from maybe other parts of the country or other regions of New England, and it perfects that model further to improve safety. This is just one example of the thousands and thousands that are going to develop in the coming decade. I want to talk about how we see this evolving over time. Inference is where we think the value is. That's where the rubber meets the road, so to speak, based on the previous example. Now, this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these applications will mature over time as inference becomes more and more mainstream. The opportunities for AI inference at the edge and in IoT are enormous. And we think that over time, 95% of that spending is going to go to inference, where it's probably only 5% today.
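The train-in-the-cloud, infer-at-the-edge loop described above can be sketched in a few lines. This is purely illustrative: the linear "model," the threshold, and the field names are all hypothetical stand-ins, not Tesla's actual system.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    friction: float        # road friction estimate
    steering_angle: float  # radians

def infer_adjustment(weights, r):
    """Stand-in for the deployed inference engine: a tiny linear model
    trained in the cloud, applied locally in real time."""
    return (weights["friction"] * r.friction
            + weights["steering_angle"] * r.steering_angle)

def edge_loop(weights, readings, surprise_threshold=1.0):
    """Make micro-adjustments locally; persist only 'surprising' samples
    to send back to the cloud for further model training."""
    to_upload = []
    for r in readings:
        adjustment = infer_adjustment(weights, r)
        # apply_adjustment(adjustment)  # actuate steering/braking here
        if abs(adjustment) > surprise_threshold:
            to_upload.append(r)  # e.g. an animal in the road in slick conditions
    return to_upload

weights = {"friction": -2.0, "steering_angle": 1.0}
readings = [SensorReading(0.9, 0.1), SensorReading(0.2, 0.1)]
print(edge_loop(weights, readings))  # only the first, surprising reading is kept
```

The design point is the selectivity: almost all readings are processed and discarded at the edge, and only the rare, model-improving samples travel back to the cloud.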
Now, today's modeling workloads are pretty prevalent in things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better over time. Now, in the middle here, we show the industries which are all going to be transformed by these trends. One of the points that Moschella made in his book is that it explains why, historically, vertical industries are pretty stovepiped: they have their own stacks, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries and are largely insulated from disruption from other industries, unless maybe they were part of a supply chain. But today, you see all kinds of cross-industry activity. Amazon entering grocery, entering media. Apple in finance and potentially getting into EVs. Tesla eyeing insurance. There are many, many, many examples of tech giants crossing traditional industry boundaries. And the reason is data. They have the data, and they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain, and they're continuing to improve. Blockchain today doesn't have great performance; it's very overhead-intensive with all that encryption. But as those platforms take advantage of this new processing power and better software and AI, they could very well disrupt traditional payment systems. And again, so many examples here. But what I want to do now is dig into enterprise AI a bit. And just a quick reminder, we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is Net Score; that's a measure of spending momentum. The horizontal axis is market share, or pervasiveness in the dataset. The red line at 40% is a subjective anchor that we use.
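ETR's Net Score is, in essence, the share of survey respondents increasing spend on a platform minus the share decreasing it. ETR's actual methodology has more answer categories; this is a simplified sketch of the idea, with hypothetical survey numbers:

```python
def net_score(responses):
    """Simplified ETR-style Net Score: the share of respondents increasing
    spend on a platform minus the share decreasing it."""
    more = sum(r == "more" for r in responses)
    less = sum(r == "less" for r in responses)
    return (more - less) / len(responses)

# Hypothetical survey: 55 accounts spending more, 35 flat, 10 spending less.
survey = ["more"] * 55 + ["flat"] * 35 + ["less"] * 10
print(f"{net_score(survey):.0%}")  # 45%, above the 40% "elevated" anchor line
```

So a platform clears the 40% line only when increasers outnumber decreasers by a wide margin, which is why it works as a marker of elevated spending momentum.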
Anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; very frankly, it's an adjacency to AI, you could even argue. It's the cloud where all the ML action is taking place today. But that will change, we think, as we just described, because data is going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph: spending velocity by market share on the horizontal axis. Microsoft, AWS, Google, of course, the big cloud guys, they dominate AI and machine learning. Facebook's not on here; Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, they have the scale, and as we said, lots of modeling is going on today. But this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here; they're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left. They have an extremely high Net Score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI; they're super high on the y-axis. Dataiku, they help create machine-learning-based apps. C3.ai, you're hearing a lot more about them. Tom Siebel's involved in that company. It's an enterprise AI firm; you hear a lot of their ads now about doing AI in a responsible way, really kind of enterprise AI. That's sort of always been IBM Watson's calling card. There's SAP with Leonardo, Salesforce with Einstein. Again, IBM Watson is right there, just at the 40% line. You see Oracle is there as well.
They're embedding automation and machine intelligence with their self-driving database, as they call it; that sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it. You don't have to be an expert at AI. The hard part is going to be how and where to apply AI, and the simplest answer there is: follow the data. There's so much more to the story, but we just have to leave it there for now, and I want to summarize. We have been pounding the table that the post-x86 era is here. It's a function of volume: Arm wafer volumes are 10X those of x86. Pat Gelsinger understands this; that's why he made that big announcement. He's trying to transform the company. The importance of volume in lowering the cost of semiconductors can't be overstated. And today we've quantified something that we haven't really seen much of, and really haven't seen before. And that's that the actual performance improvements we're seeing in processing today are far outstripping anything we've seen before. Forget Moore's Law being dead, that's irrelevant. The original prediction is being blown away this decade, and who knows what the future holds with quantum computing. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from consumer use cases first. Apple continues to lead the way, and Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing really that deep integration.
You see this with Oracle, which is really kind of a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Pat Gelsinger may face the massive challenges that we laid out a couple of posts ago in our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing. There's no end in sight. We don't think we're going to see the ebbs and flows we've seen in the past, those boom and bust cycles for semiconductors. We just think that prices are coming down, the market's elastic, and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI; rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI, and you're going to be applying it. Now, data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and it seems he was right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio and storage and networking, and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible.
And you're going to be writing new applications to take advantage of this and use AI to change the world, literally. You'll have to figure out how to get access to the most relevant data. You'll have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah, well, that's a whole other topic. I think for now we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay, that's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind it. He's done some great work there. Remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast, and please subscribe to the series. We'd appreciate that. Check out ETR's website at etr.plus. We also publish a full report with more detail every week on wikibon.com and siliconangle.com, so check that out. You can get in touch with me; I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well, and we'll see you next time. (bright music)

Published Date : Apr 10 2021

JG Chirapurath, Microsoft | theCUBE on Cloud 2021


 

>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. >> Okay, we're now going to explore the vision of the future of cloud computing from the perspective of one of the leaders in the field. JG Chirapurath is the vice president of Azure Data, AI and Edge at Microsoft. JG, welcome to theCUBE on Cloud. Thanks so much for participating. >> Well, thank you, Dave, and it's a real pleasure to be here with you. And I just want to welcome the audience as well. >> Well, JG, judging from your title, we have a lot of ground to cover, and our audience is definitely interested in all the topics that are implied there. So let's get right into it. You know, we've said many times in theCUBE that the new innovation cocktail comprises machine intelligence, or AI, applied to troves of data with the scale of the cloud. It's no longer, you know, that we're driven by Moore's Law; it's really those three factors, and those ingredients are going to power the next wave of value creation in the economy. So first, do you buy into that premise? >> Yes, absolutely, we do buy into it. And I think, you know, one of the reasons why we put data, analytics and AI together is because all of that really begins with the collection of data, managing it, governing it, and unlocking analytics on it. And we tend to see things like AI, the value creation that comes from AI, as being on that continuum, having started off with really things like analytics and proceeding to, you know, machine learning and the use of data in interesting ways. >> Yes. I'd like to get some more thoughts around data, how you see the future of data and the role of cloud, and maybe how Microsoft's strategy fits in there. I mean, your portfolio: you've got SQL Server, Azure SQL. You've got Arc, which is kind of Azure everywhere, for people that aren't familiar with that. You've got Synapse.
Which, of course, does all the integration of a data warehouse, gets things ready for BI and consumption by the business, and the whole data pipeline. And a lot of other services: Azure Databricks, you've got Cosmos in there, Blockchain. You've got open source services like Postgres and MySQL. So lots of choices there. And I'm wondering, you know, how do you think about the future of cloud data platforms? It looks like your strategy is right tool for the right job. Is that fair? >> It is fair, but it's also, just to step back and look at it, fundamentally what we see in this market today is that customers really seek a comprehensive proposition. And when I say a comprehensive proposition, it is sometimes not just about saying, hey, listen, we know you're a SQL Server company, we absolutely trust that you have the best Azure SQL database in the cloud, but tell us more. We've got data that's sitting in Hadoop systems. We've got data that's sitting in Postgres, in things like MongoDB, right? So that open source proposition today, in data, and data management, and database management, has become front and center. So our real sort of push there, when it comes to migration, management, modernization of data, is to present the broadest possible choice to our customers so we can meet them where they are. However, when it comes to analytics, one of the things they ask for is: give us a lot more converged use. You know, it really isn't about having 50 different services; it's really about having that one comprehensive service that is converged. That's where things like Synapse fit in, where you can just land any kind of data in the lake and then use any compute engine on top of it to drive insights from it. So, fundamentally, you know, it is that flexibility that we really sort of focus on, to meet our customers where they are and really not push our dogma and our beliefs on them,
but to meet our customers according to the way they have deployed things like this. >> So that's great. I want to stick on this for a minute, because, you know, when I have guests on like yourself, you never want to talk about the competition, but that's all we ever talk about, and that's all your customers ever talk about. Because the counter to that right-tool-for-the-right-job approach, which I would say is really kind of Amazon's approach, is the single unified data platform, the mega database that does it all, and that's kind of Oracle's approach. It sounds like you want to have your cake and eat it too: you've got the right-tool-for-the-right-job approach, but you've got an integration layer that allows you to have that converged database. I wonder if you could add color to that, and confirm or deny what I just said. >> No, that's a very fair observation, but I'd say there's a nuance in what I sort of described. When it comes to data management, when it comes to apps, we give customers the broadest choice. But even in that perspective, we also offer convergence. So, case in point: when you think about Cosmos DB, under that one sort of service you get multiple engines, but with the same properties, right? Global distribution, the five-nines availability. It gives customers the ability to choose: when they have to build a new cloud-native app, they can adopt Cosmos DB and adopt it in a way that lets them choose the engine that is most flexible to them. However, you know, when it comes to, say, migrating a SQL Server, for example, or modernizing it, sometimes you just want to lift and shift it into things like IaaS; in other cases, you want to completely rewrite it. So you need to have the flexibility of choice there, presented by the legacy of what sits on premises. When it moves into things like analytics, we absolutely believe in convergence, right?
So we don't believe that, look, you need to have a relational data warehouse that is separate from a Hadoop system, that is separate from, say, a BI system that is just, you know, a bolt-on. For us, we love the proposition of really building things that are so integrated that once you land data, once you prep it inside the lake, you can use it for analytics, you can use it for BI, you can use it for machine learning. So I think, you know, our sort of differentiated approach speaks for itself there. >> Well, that's interesting, because essentially, again, you're not saying it's an either/or, and you're seeing a lot of that in the marketplace. You've got some companies saying no, it's the data lake, and others saying no, no, put it in the data warehouse, and that causes confusion and complexity around the data pipeline, and a lot of cost. And I'd love to get your thoughts on this. A lot of customers struggle to get value out of data, and specifically data product builders are frustrated that it takes too long to go from, you know, this idea of, hey, I have an idea for a data service and it could drive monetization, but to get there you've got to go through this complex data lifecycle and pipeline and beg people to add new data sources. Do you feel like we have to rethink the way that we approach data architectures? >> Look, I think we do in the cloud, and I think the place where I see the most amount of rethink, the most amount of push from our customers to really rethink, is the area of analytics and AI. It's almost as if what worked in the past will not work going forward, right? So when you think about analytics in the enterprise today, you have relational systems, you have Hadoop systems, you've got data marts, you've got data warehouses, you've got enterprise data warehouses, you know, those large honking databases that you use to close your books with, right?
But when you start to modernize it, what people are saying is that they don't want to simply take all of that complexity that they've built over, say, you know, three or four decades and simply migrate it en masse, exactly as it is, into the cloud. What they really want is a completely different way of looking at things. And I think this is where services like Synapse provide a completely differentiated proposition to our customers. What we say there is: land the data in any way, shape or form inside the lake. Once you've landed it inside the lake, you can essentially use Synapse Studio to prep it in the way that you like, use any compute engine of your choice, and operate on this data in any way that you see fit. So, case in point: if you want to hydrate a relational data warehouse, you can do so. If you want to do ad hoc analytics using something like Spark, you can do so. If you want to invoke Power BI on that data, you can do so. If you want to bring a machine learning model onto this prepped data, you can do so. So inherently, when customers buy into this proposition, what it solves for them, and what it gives them, is complete simplicity, right? One way to land the data, multiple ways to use it. >> Should we think of Synapse as an abstraction layer that abstracts away the complexity of the underlying technology? Is that a fair way to think about it? >> Yeah, you can think of it that way. It abstracts away, Dave, a couple of things. It takes away the complexities related to the type of data. It takes away the complexity related to the size of data. It takes away the complexity related to creating pipelines around all these different types of data, and fundamentally puts it in a place where it can now be consumed by any sort of entity inside the Azure proposition. And by that token, even Databricks.
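The "one way to land the data, multiple ways to use it" pattern described above can be sketched with a toy example. This is a minimal, stdlib-only illustration of the idea, not the Synapse API; the dataset, the column names, and the two "consumers" are invented for illustration.

```python
import csv
import io
import statistics

# A toy "lake": raw sales events landed once, in one place.
landed = io.StringIO()
writer = csv.writer(landed)
writer.writerow(["region", "amount"])
for row in [("east", 120), ("west", 80), ("east", 200), ("west", 50)]:
    writer.writerow(row)

def read_lake():
    """Every consumer reads the same landed data; nothing is copied."""
    landed.seek(0)
    return list(csv.DictReader(landed))

# Consumer 1: a BI-style aggregate (revenue per region).
revenue = {}
for rec in read_lake():
    revenue[rec["region"]] = revenue.get(rec["region"], 0) + int(rec["amount"])

# Consumer 2: an ML-style feature (mean order size) over the same landed data.
mean_amount = statistics.mean(int(rec["amount"]) for rec in read_lake())

print(revenue)      # {'east': 320, 'west': 130}
print(mean_amount)  # 112.5
```

The point is the shape of the pipeline: the data is landed and prepped once, and each engine, whether BI, ML, or ad hoc query, is just another reader over the same store.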
You know, you can, in fact, use Databricks in sort of an integrated way with Synapse, right? >> Well, so that leads me to this notion of, and I wonder if you buy into it: my inference is that a data warehouse or a data lake could just be a node inside of a global data mesh, and then Synapse is sort of managing that technology on top. Do you buy into that global data mesh concept? >> We do, and we actually do see our customers using Synapse, and the value proposition that it brings together, in that way. Now, it's not where they start. Oftentimes a customer comes and says, look, I've got an enterprise data warehouse, I want to migrate it, or I have a Hadoop system, I want to migrate it. But from there, the evolution is absolutely interesting to see. I'll give you an example. You know, one of the customers that we're very proud of is FedEx. And what FedEx is doing is completely reimagining its logistics system, basically the system that delivers, what is it, three million packages a day, and doing so in these COVID times with the view of basically delivering the COVID vaccines. One of the ways they're doing it is basically using Synapse. Synapse is essentially that analytic hub where they can get a complete view into their logistics processes, the way things are moving, understand things like delays, and really put all that together in a way that they can essentially get packages, and these vaccines, delivered as quickly as possible. Another example, you know, is one of my favorites. We see, once customers buy into it, they essentially can do other things with it. So an example of this, really my favorite story, is the Peace Parks Initiative. It is the premier white rhino conservancy in the world. They essentially are using data that has landed in Azure, images in particular.
So, basically, you know, they use drones over the vast area that they patrol and use machine learning on this data to really figure out where there is an issue and where there isn't, so that this park, with about 200 rangers, can scramble surgically, versus having to range across the vast area that they cover. So what you see here is, you know, the importance is really getting your data in order, landing it consistently, whatever the kind of data, building the right pipelines, and then the possibilities of transformation are just endless. >> Yeah, that's very nice how you worked in some of the customer examples; I appreciate that. I want to ask you, though: some people might say that putting in that layer, while it clearly adds simplification, and I think that's a great thing, there begins over time to be a gap, if you will, between the ability of that layer to integrate all the primitives and all the piece parts, and that you lose some of that fine-grained control and it slows you down. What would you say to that? >> Look, I think that's what we excel at, and that's what we completely sort of buy into. It's our job to basically provide that level of integration, and that granularity, in the right way. So it's an art; I'll absolutely admit it's an art. There are areas where people want simplicity and not a lot of, you know, knobs and dials and things like that. But there are areas where customers want flexibility, right? So just to give you an example of both of them: in landing the data, in consistency, in building pipelines, they want simplicity. They don't want complexity; they don't want 50 different places to do this, just one to do it. When it comes to computing on this data, analyzing this data, they want flexibility. This is one of the reasons why we say, hey, listen, you want to use Databricks? If you're buying into that proposition and you're absolutely happy with them, you can plug it in.
You want to use BI and, you know, essentially do a small data mart? You can use BI. If you say, look, I've landed in the lake, I really only want to use ML, bring in your ML models and party on. So that's where the flexibility comes in. That's sort of how we really think about it. >> Well, I like the strategy, because, you know, one of our guests, Zhamak Dehghani, I think one of the foremost thinkers on this notion of the data mesh, her premise is that data builders, data product and service builders, are frustrated because the big data system is generic to context; there's no context in there. But by having context in the big data architecture and system, you could get products to market much, much faster. So that seems to be your philosophy. But I'm going to jump ahead to my ecosystem question. You've mentioned Databricks a couple of times. There's another partner that you have, which is Snowflake. They're kind of trying to build out their own data cloud, if you will, a global mesh. On the one hand they're a partner; on the other hand they're a competitor. How do you sort of balance and square that circle? >> Look, when I see Snowflake, I actually see a partner. This is where I sort of step back and look at Azure as a whole, and in Azure as a whole, companies like Snowflake are vital in our ecosystem, right? I mean, there are places we compete, but, you know, effectively, by helping them build the best Snowflake service on Azure, we essentially are able to, you know, differentiate and offer a differentiated value proposition compared to, say, a Google or an AWS. In fact, that's been our approach with Databricks as well, where, you know, they are effectively on multiple clouds, and our opportunity with Databricks is to essentially integrate them in a way where we offer the best experience,
the best integrations, on Azure, bar none. That's always been our focus. >> That's hard to argue with that strategy. Our data, with our data partner ETR, shows Microsoft is both pervasive and, impressively, has a lot of momentum, spending velocity within the budget cycles. I want to come back to AI a little bit. It's obviously one of the fastest growing areas in our survey data, and as I said, clearly Microsoft is a leader in this space. What's your vision of the future of machine intelligence, and how will Microsoft participate in that opportunity? >> Yeah, so fundamentally, you know, we've built on decades of research around, essentially, vision, speech and language. Those have been the three core building blocks, and for a really focused period of time we focused on essentially ensuring human parity. So if you ever wondered what the keys to the kingdom are, it's the research posture that we've taken there. What we've then done is essentially a couple of things. We focused on looking at the spectrum that is AI, both from saying, listen, it's got to work for data analysts who are looking to basically use machine learning techniques, to developers who are essentially, you know, coding and building machine learning models from scratch. So the proposition for us is really AI focused on all skill levels. The other core thing we've done is that we've also said, look, it will only work as long as people trust their data and they can trust their AI models. So there's a tremendous body of work and research we do in things like responsible AI. So if you ask me, where we sort of push is fundamentally to make sure that we never lose sight of the fact that the spectrum of AI can sort of come together for any skill level, and we keep that responsible AI proposition absolutely strong. Now, against that canvas, Dave,
I'll also tell you that, you know, as edge devices get way more capable, where they can compute on the edge, say a camera or a mic or something like that, you will see us pushing a lot more of that capability onto the edge as well. But to me, that's sort of a modality; the core really is all skill levels and that responsible AI. >> Yeah, so that brings me to this notion of, I want to bring in edge and hybrid cloud, and understand how you're thinking about hybrid cloud and multicloud. Obviously one of your competitors, Amazon, won't even say the word multicloud. You guys have, you know, a different approach there. But what's the strategy with regard to hybrid? You know, do you see the cloud, you know, bringing Azure to the edge? Maybe you could talk about that, and talk about how you're different from the competition. >> Yeah, I think on the edge, you know, I'll be the first one to say that the word edge itself is conflated, okay? But I will tell you, just focusing on hybrid: this is one of the places where, you know, I would say 2020, if I look back from a corporate perspective in particular, has been the most informative, because we absolutely saw customers digitizing, moving to the cloud, and we really saw hybrid in action. 2020 was the year that hybrid sort of really became real from a cloud computing perspective. And an example of this is, we understood it's not all or nothing. So sometimes customers want Azure consistency in their data centers; this is where things like Azure Stack come in. Sometimes they basically come to us and say, we want the flexibility of adopting flexible patterns, you know, platforms like, say, containers, orchestrators, Kubernetes, so that we can essentially deploy wherever we want. And so when we designed things like Arc, it was built with that flexibility in mind. So here is the beauty of what something like Arc can do for you.
If you have a Kubernetes endpoint anywhere, we can deploy an Azure service onto it. That is the promise. Which means, if for some reason the customer says, hey, I've got this Kubernetes endpoint in AWS and I love Azure SQL, you will be able to run Azure SQL inside AWS. There's nothing that stops you from doing it. So inherently, you remember, our first principle is always to meet our customers where they are. So from that perspective, multicloud is here to stay. You know, we're never going to be the people that say, I'm sorry, we will never say AWS; it is a reality for our customers. >> So I wonder if we could close, and thank you for that, by looking back and then ahead. And I want to put forth, maybe it's a criticism, but maybe not, maybe it's an art of Microsoft. First, you know, Microsoft has done an incredible job of transitioning its business to Azure, and as we've said, our data shows that. So, a two-part question. First, Microsoft got there by investing in the cloud, really changing its mindset, I think, and leveraging its huge software estate and customer base to put Azure at the center of its strategy. And many have said, me included, that you got there by creating products that are good enough: you know, you do a 1.0, it's not that great, and the 2.0, maybe not the best, but acceptable for your customers. And that's allowed you to grow very rapidly in an expanding market. How do you respond to that? Is that a fair comment, or are you more than good enough? I wonder if you could share your thoughts. >> Dave, you hurt my feelings with that question. >> Don't hate me, JG, I'm just getting it out there. >> So, first of all, thank you for asking me that. You know, I am absolutely the biggest cheerleader you'll find at Microsoft.
I absolutely believe, you know, that I represent the work of almost 9,000 engineers, and we wake up every day worrying about our customers and the customer condition, to absolutely make sure we deliver the best the first time that we do. So when you take the platter of products we've delivered in Azure, be it Azure SQL, be it Azure Cosmos DB, Synapse, Azure Databricks, which we did in partnership with Databricks, Azure Machine Learning, and recently, when we previewed, we sort of, you know, offered the world's first comprehensive data governance solution in Azure Purview, I would humbly submit to you that we're leading the way, and we're essentially showing how the future of data, AI and edge will work in the cloud. >> I'd be disappointed if you had capitulated in any way, JG, so thank you for that. And the kind of last question is looking forward: how are you thinking about the future of cloud? The last decade was a lot about cloud migration, simplifying infrastructure management and deployment, SaaS-ifying my enterprise, a lot of simplification and cost savings, and, of course, the redeployment of resources toward digital transformation and other valuable activities. How do you think this coming decade will be defined? Will it be sort of more of the same, or is there something else out there? >> I think the coming decade will be one where customers start to unlock outsized value out of this. You know, what happened in the last decade is people laid the foundation; people essentially looked at the world and said, look, we've got to make the move, you know, largely hybrid, but we're going to start making steps to basically digitize and modernize our platforms. I would tell you that, with the amount of data that people are moving to the cloud, just as an example, you're going to see the use of analytics and AI for business outcomes explode.
You're also going to see a huge sort of focus on things like governance. You know, people need to know where the data is, what the data catalog contains, how to govern it, how to trust this data, given all the privacy and compliance regulations out there and, essentially, their compliance posture. So I think, first, the unlocking of outcomes versus simply, hey, I've saved money; second, really putting this comprehensive sort of, you know, governance regime in place; and then, finally, security and trust are going to be more paramount than ever before. >> Yeah, nobody's going to use the data if they don't trust it. I'm glad you brought up security; it's a topic that hits number one on the CEO list. JG, great conversation. Obviously the strategy is working, and thanks so much for participating in theCUBE on Cloud. >> Thank you, David. I appreciate it, and thank you to everybody who was tuning in today. >> All right, and keep it right there. I'll be back with our next guest right after this short break.

Published Date : Jan 22 2021



Pradeep Sindhu, Fungible | theCUBE on Cloud 2021


 

>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. >> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years, alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade, we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of the DPU. Pradeep, welcome to theCUBE. Great to see you. >> Thank you, Dave, and thank you for having me. >> You're very welcome. So, okay, my first question is: don't CPUs and GPUs process data already? Why do we need a DPU? >> You know, that is a natural question to ask. CPUs have been around in one form or another for almost, you know, 55, maybe 60 years. And, you know, this is when general-purpose computing was invented, and essentially all CPUs went to the x86 architecture. By and large, Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general-purpose CPUs has been refined heavily by some of the smartest people on the planet.
And for the longest time, improvements, you referred to Moore's Law, which is really the improvement of the price-performance of silicon over time, that, combined with architectural improvements, was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much more; you're not going to squeeze more blood out of that stone from general-purpose computer architectures. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10 to 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from 2 to 2.5 years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well recognized, and we have to understand that these limits apply not just to general-purpose CPUs, but they also apply to GPUs. Now, general-purpose CPUs do one kind of computation; they're really general, and they can do lots and lots of different things. It is actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than CPUs, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is: why do you need another specialized engine called the DPU?
Well, I started down this journey almost eight years ago, when I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes to address more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And you now have a new type of workload coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what data-centric means. >> Well, I wonder if I could interrupt you for a second, because I want those examples, and I want you to tie it into the cloud, because that's kind of the topic that we're talking about today, and how you see that evolving. It's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure: a little compute, storage, networking. And now we have, to your point, all this data in the cloud, and we're seeing, by the way, the definition of cloud expand into this distributed, or I think the term you use is disaggregated, network of computers. So you're a technology visionary, and I wonder, you know, how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely happy to do that. So, you know, if you look at the architecture of cloud data centers, the single most important invention was scale-out: scale-out of identical or near-identical servers, all connected to a standard IP Ethernet network. That's the architecture. Now, the building blocks of this architecture are the Ethernet switches, which make up the IP network.
And then the servers, all built using general-purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how you build very large scale infrastructure using general-purpose computing. But this architecture, Dave, is a compute-centric architecture. And the reason it's a compute-centric architecture is that if you open a server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs, which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the I/O workload, what we call the data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the I/O. So every I/O has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general-purpose CPUs and the architecture of the CPU were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of the workload which is data-centric has gone from maybe 1 to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to, say, 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 50 megahertz; the network was running at three megabits per second.
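The "traffic cop" claim above, that a CPU handling every I/O is interrupted many millions of times a second, can be checked with back-of-envelope arithmetic. The 100 Gbps line rate comes from the conversation; the 1500-byte Ethernet frame size is an assumption made for this sketch.

```python
# Back-of-envelope: how many packets per second does a 100 Gbps link
# deliver if every packet is a full-size 1500-byte Ethernet frame?
link_bps = 100e9        # 100 Gbps line rate
frame_bits = 1500 * 8   # assumed full-size Ethernet frame

packets_per_sec = link_bps / frame_bits
print(f"{packets_per_sec / 1e6:.1f} million packets per second")  # 8.3
```

Smaller frames only make it worse: at 64-byte frames the same link carries close to 200 million packets per second, far beyond what an interrupt-per-packet design can absorb.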
Well, today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 to 3 gigahertz. So you've seen that there is a 600x change in the ratio of I/O to compute, just in raw clock speed. Now, you can tell me, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of I/O relative to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. The DPU actually solves two fundamental problems in cloud data centers, and these are fundamental; there's no escaping it, and no amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. Okay, that's number one. Problem number two is that these data-center computations, and I'll give you four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs, you'll run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems, and you don't solve them by just taking older architectures off the shelf and applying them to these problems, because that is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU, from the ground up. It's a clean-sheet design, and it solves those two fundamental problems. >> So I want to get into that.
But I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly: if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. And you know, the workloads that we have today are very data-heavy. Take AI, for example; take analytics, for example. It's well known that for AI training, the larger the corpus of relevant data you're training on, the better the result. So you can imagine where this is going to go, especially when people have figured out a formula that says, hey, the more data I collect, the more insights I can use to make money. >> Yeah, this is why I wanted to talk to you, because for the last 10 years we've been collecting all this data. Now I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. The first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud, and there was a security angle there as well; that's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So, you know, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture and how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve this problem. So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop.
And I use this term very specifically. So first, let me define what I mean by a data-centric computation, because that's the essence of the problem we solve. Remember, I said two problems. One is that we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first, let's look at the data-centric piece. For a workload to qualify as data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many computations happening concurrently, thousands of them. That's number two: a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order, because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of I/O to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general-purpose CPUs. Not only does the general-purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, what we did was build an architecture that consists of very heavily multithreaded general-purpose CPUs combined with very heavily threaded specific accelerators.
I'll give you examples of some of those accelerators: DMA accelerators, erasure-coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators, to name just some. These are functions that, if you do not specialize them, you're not going to execute efficiently. But you cannot just put accelerators in there; these accelerators have to be multithreaded. We have something like 1,000 different threads inside our DPU to address these many computations that are happening concurrently, and to handle them efficiently. Now, the thing that is very important to understand is this: I know that we have hundreds of billions of transistors on a chip, but the problem is that those transistors are used very inefficiently today if the architecture is that of a CPU or GPU. What we have done is improve the efficiency of those transistors by 30 times. So you can use... >> The real estate. You can use the real estate more effectively. >> Much more effectively, because we were not trying to solve a general-purpose computing problem. If you do that, you end up in the same bucket where general-purpose CPUs are today. We were trying to solve the specific problem of data-centric computations and of improving node-to-node efficiency. So let me go to point number two, because that's equally important. In a scale-out architecture, the whole idea is that I have many, many nodes, and they're connected over a high-performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why. Well, the reason is that if I try to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today.
There's only one solution, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 43 years ago now. >> Very reliable, and tested and proven. It's got a good track record. >> A very good track record; unfortunately, it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP, how would you apply it to the data center? That's what we've done with what we call FCP, a fabric control protocol, which we intend to open; we intend to publish standards and make it open. And when you do that, and you embed FCP in hardware on top of a standard IP Ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%, and you end up solving the problems of congestion at the same time. Now, why is this important? That's all geek speak so far. But the reason this stuff is important is that such a network allows you to disaggregate, pool, and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side, and increasingly even things like RAM want to be disaggregated and pooled. Well, if I put everything inside a general-purpose server, the problem is that those resources get stranded, because they're stuck behind the CPU. Once you disaggregate those resources, and we're saying hyper-disaggregate, the hyper simply means that you can disaggregate almost all the resources. >> And then you're going to re-aggregate them, right? I mean, that's... >> Exactly, and the network is the key enabler.
So the reason the company is called Fungible is because we are able to disaggregate, virtualize, and then pool those resources. The scale-out companies, you know, the large ones, AWS, Google, et cetera, have been doing disaggregation and pooling for some time, but because they've been using a compute-centric architecture, that disaggregation is not nearly as efficient as we can make it; they're off by about a factor of three. When you look at enterprise companies, they're off by another factor of four, because the utilization in enterprises is typically around 8% of overall infrastructure, while the utilization in the cloud for AWS and GCP and Microsoft is closer to 35 to 40%. So there is a factor of almost 4 to 8 which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt again. So these hyperscalers are smart; they have a lot of engineers. And we've seen them, yeah, you're right, they're using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that, with all the data that's in the cloud, again our topic today, you would think the hyperscalers are all over this. >> All the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has its own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean. >> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things.
And eventually it will be understood that you need a third element, called the DPU, to address these problems. Now, of course, you've heard the term SmartNIC, and all your listeners must have heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general-purpose Arm cores, putting in the network interface and a PCIe interface, and integrating them all in the same chip, separating them from the CPU. So this does solve a problem. It solves the problem of the data-centric workload interfering with the application workload. Good job. But it does not address the architectural problem of how to execute data-centric workloads efficiently. >> Yeah, I understand what you're saying. I was going to ask you about SmartNICs. It's almost like a bridge or a Band-Aid. It reminds me of throwing flash storage onto a disk system that was designed for spinning disk: it gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, the analogy is close. Okay, so let's take hyperscaler X, no naming names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C and C++ on x86. Well, the easiest thing to do is to separate out the cores that run this workload and put it on a different processor. Let's say we use Arm, simply because x86 licenses are not available to people to build their own CPUs, so Arm was available. So they put in a bunch of Arm cores, stick a PCI Express interface and a network interface on it, and port that code from x86 to Arm.
Not difficult to do, and it does yield results. By the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general-purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation, other than porting code from one place to another. >> But that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see, when the hyperscalers Microsoft Azure and AWS both announced, I think, that they depreciate servers now over five years instead of four, it dropped like a billion dollars to their bottom lines. But why not just work directly with you guys? I mean, it's the logical play. >> Some of them are working with us. So it's not to say that they're not working with us. All of the hyperscalers recognize that the technology we're building is fundamental, that we have something really special, and moreover, it's fully programmable. You see, the whole trick is, you can actually build a lump of hardware that is fixed-function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how you come up with an architecture where the functionality is programmable but is also very high speed for this particular set of applications.
So the analogy with GPUs is nearly perfect, because with GPUs, and particularly Nvidia, they invented CUDA, which is a programming language for GPUs, and it made them easy to use and fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made it very easy to program. And the computations that I talked about, security, virtualization, storage, and networking, those four are quintessential examples of data-centric workloads, and they're not going away. In fact, they're becoming more and more important over time. >> I'm very excited for you guys, and really appreciate it, Pradeep. We're going to have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure-coding accelerators, crypto accelerators; I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of this, I like this term, massive disaggregated network. Hyper-disaggregated, even better. And I would say this on the way out, I've got to go: what got us here the last decade is not the same as what's going to take us through the next decade. Pradeep, thanks so much for coming on theCUBE. It's a great company. >> It's really a pleasure to speak with you and get the message of Fungible out there. >> I promise we'll have you back. And keep it right there, everybody; we've got more great content coming your way on theCUBE on Cloud. This is Dave. Stay right there.

Published Date : Jan 22 2021


Simon Crosby, Swim | Cube On Cloud


 

>> Hi, I'm Stu Miniman, and welcome back to theCUBE on Cloud, talking about really important topics as to how developers are changing how they build their applications and where those live, of course, a long discussion we've had for a number of years. You know, how do things change in hybrid environments? We've been talking for years about public cloud and private cloud, and I'm really excited for this session: we're going to talk about how edge environments and AI impact that. So happy to welcome back one of our CUBE alumni, Simon Crosby, who is currently the Chief Technology Officer with Swim. He's got plenty of viewpoints on AI and the edge, and knows the developer world well. Simon, welcome back. Thanks so much for joining us. >> Thank you, Stu, for having me. >> All right, so let's start for a second. Let's talk about developers. For years we talked about, you know, what level of abstraction we get. Do I put it on bare metal? Do I virtualize it? Do I containerize it? Do I make it serverless? A lot of those things the app developer doesn't want to even think about, but location matters a whole lot when we're talking about things like AI. Where do I have all my data so that I can do my training? Where do I actually have to do the processing? And of course, edge changes some of the things, like latency and where data lives, by orders of magnitude. So with that as a setup, I would love to get your framework as to what you're hearing from developers, and then we'll get into some of the solutions that you and your team are providing to help them do their jobs. >> Well, you're absolutely right, Stu. The data onslaught is very real. Companies that I deal with are facing more and more real-time data, from products, from their infrastructure, from their partners, whatever it happens to be, and they need to make decisions rapidly.
And the problem that they're facing is that traditional ways of processing that data are too slow. So perhaps the big data approach, which by now is a bit old, it's a bit long in the tooth, where you store data and then you analyze it later, is problematic. First of all, data streams are boundless. So you don't really know when to analyze, but second you can't store it all. And so the store then analyze approach has to change and Swim is trying to do something about this by adopting a process of analyze on the fly, so as data is generated, as you receive events you don't bother to store them. You analyze them, and then if you have to, you store the data, but you need to analyze as you receive data and react immediately to be able to generate reasonable insights or predictions that can drive commerce and decisions in the real world. >> Yeah absolutely. I remember back in the early days of big data, you know, real time got thrown around a little but it was usually I need to react fast enough to make sure we don't lose the customer, react to something, but it was, we gather all the data and let's move compute to the data. Today as you talk about, you know, real time streams are so important. We've been talking about observability for the last couple of years to just really understand the systems and the outputs more than looking back historically at where things were waiting for alerts. So could you give us some examples if you would, as to you know, those streams, you know, what is so important about being able to interact and leverage that data when you need it? And boy, it's great if we can use it then and not have to store it and think about it later, obviously there's some benefits there, because-- >> Well every product nowadays has a CPU, right? And so there's more and more data. And just let me give you an example, Swim processes real-time data from more than a hundred million mobile devices in real time, for a mobile operator. 
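The analyze-on-the-fly pattern Simon describes can be sketched generically: each event updates a small piece of in-memory state and is then discarded, so state stays constant-sized no matter how boundless the stream is (a minimal illustration of the pattern, not Swim's API; the incremental running mean is a standard streaming trick):

```python
class RunningStats:
    """Analyze on the fly: each event updates O(1) state; no event is stored."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.peak = float("-inf")

    def observe(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count  # incremental mean update
        self.peak = max(self.peak, value)              # running maximum

stats = RunningStats()
for reading in [3.0, 5.0, 4.0, 8.0]:  # stands in for a boundless event stream
    stats.observe(reading)
# stats now holds the insight (count, mean, peak) without any stored events
```

Contrast this with store-then-analyze, where all four readings would be written to disk and scanned later: here the answer is always current as of the last event.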
And what we're doing there is we're optimizing connection quality between devices and the network. Now that volume of data is more than four petabytes per day, okay. Now there is simply no way you can ever store that and analyze it later. The interesting thing about this is that if you adopt and analyze, and then if you really have to store architecture, you get to take advantage of Moore's Law. So you're running at CPU memory speeds instead of at disk speed. And so that gives you a million fold speed up, and it also means you don't have the latency problem of reaching out to, or about storage, database, or whatever. And so that reduces costs. So we can do it on about 10% of the infrastructure that they previously had for Hadoop style implementation. >> So, maybe it would help if we just explain. When we say edge people think of a lot of different things, is it, you know an IOT device sitting out at the edge? Are we talking about the Telecom edge? We've been watching AWS for years, you know, spider out their services and into various environments. So when you talk about the type of solutions you're doing and what your customers have, is it the Telecom edge? Is it the actual device edge, you know, where does processing happen and where do these you know, services that work on it live? >> So I think the right way to think about edge is where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze data in the clear. So edge computing is not so much a place in my view as the first opportunity you have to process data in the clear and to make sense of it. 
And then edge makes sense, in terms of latency, by locating, compute, as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users, you know, quickly. So edge for me often is the cloud. >> Excellent, one of the other things I think about back from, you know, the big data days or even earlier, it was that how long it took to get from the raw data to processing that data, to be able to getting some insight, and then being able to take action. It sure sounds like we're trying to collapse that completely, is that, you know, how do we do that? You know, can we actually, you know, build the system so that we can, you know, in that real time, continuous model that you talk about, you know. Take care of it and move on. >> So one of the wonderful things, one of the wonderful things about cloud computing is that two major abstractions have really served us. And those are rest, which is static disk computing, and databases. And rest means any old server can do the job for me and then the database is just an API call away. The problem with that is that it's desperately slow. So when I say desperately slow, I mean, it's probably thrown away the last 10 years of Moore's law. Just think about it this way. Your CPU runs at gigahertz and the network runs at milliseconds. So by definition, every time you reach out to a data store you're going a million times slower than your CPU. That's terrible. It's absolutely tragic, okay. So a model which is much more effective is to have an in-memory computer architecture in which you engage in staple computation. So instead of having to reach out to a database every time to update the database and whatever, you know, store something, and then fetch it again a few moments later when the next event arrives, you keep state in memory and you compute on the fly as data arrives. And that way you get a million times speed up. 
You also end up with a tremendous cost reduction, because you don't end up with as many instances having to compute, by comparison. So let me give you a quick example. If you go to traffic.swim.ai, you can see the real-time state of the traffic infrastructure in Palo Alto, and each one of those intersections is predicting its own future. Now, the volume of data from just a few hundred lights in Palo Alto is about four terabytes a day. And sure, you can deal with this in AWS Lambda; there are lots and lots of servers up there. But the problem is that the end-to-end per-event latency is about 100 milliseconds, and if I'm dealing with 30,000 events a second, that's just too much. Solving that problem with a stateless architecture is extraordinarily expensive, more than $5,000 a month, whereas a stateful architecture, which you could think of as an evolution of something reactive, or of the actor model, gets you something like a tenth of the cost, okay. So cloud is fabulous for things that need to scale wide, but a stateful model is required for dealing with things which update you rapidly or regularly about their changes in state. >> Yeah, absolutely. You know, I think about, as I mentioned before, AI training models. Often, if you look at something like autonomous vehicles, the massive amounts of data that they need to process has to happen in the public cloud, but then that gets pushed back down to the end device, in this case a car, because it needs to be able to react in real time, and it gets fed the new training algorithms at regular updates. What are you seeing-- >> I have a strong view on this training approach, and on data science in general, and that is that there aren't enough data scientists or, you know, smart people to train these algorithms, deploy them to the edge, and so on.
And so there is an alternative worldview, which is a much simpler one, and that is that relatively simple algorithms, deployed at scale to stateful representatives, let's call them digital twins of things, can deliver enormous improvements in behavior as things learn for themselves. So the way I think at least this edge world gets smarter is that relatively simple models of things will learn for themselves, create their own futures based on what they can see, and then react. And so this idea that we have lots and lots of data scientists dealing with vast amounts of information in the cloud is suitable for certain algorithms, but it doesn't work for the vast majority of applications. >> So where are we with what developers need to think about? You mentioned that there's compute in most devices. That's true, but do they need some special Nvidia chipset out there? Are there certain programming languages that you are seeing as more prevalent? Interoperability? Give us a little bit of, you know, some tips and tricks for those developing. >> Super. So, number one, a stateful architecture is fundamental. Sure, React is well known, and there are Akka, for example, and Erlang. Swim is another, so I'm going to use Swim language, and I would encourage you to look at swimos.org and go play there. A stateful architecture, which allows actors, small concurrent objects, to statefully evolve their own state based on updates from the real world, is fundamental. By the way, in Swim we use data to build these models. So these little agents for things, we call them web agents because the object ID is a URI, statefully evolve by processing their own real-world data, statefully representing it. And then they do this wonderful thing, which is build a model on the fly. And they build the model by linking to things that they're related to.
So an intersection would link to all of its sensors, but it would also link to all of its neighbors, because the neighbors matter, and linking is like a sub in pub/sub, and it allows that web agent then to continually analyze, learn, and predict on the fly. And so every one of these concurrent objects is doing this job of analyzing its own raw data and then predicting from that and streaming the result. So in Swim, you get streamed raw data in, and what streams out is predictions, predictions about the future state of the infrastructure. And that's a very powerful stateful approach which can run all in memory, no storage required. By the way, it's still persistent, so if you lose a node, you can just come back up and carry on, but there's no need to store huge amounts of raw data if you don't need it. And let me just be clear: the volumes of raw data from the real world are staggering, right? So, four terabytes a day from Palo Alto, but Las Vegas, about 60 terabytes a day from the traffic lights. More than 100 million mobile devices is tens of petabytes per day, which is just too much to store. >> Well, Simon, you've mentioned that we have a shortage when it comes to data scientists and the people that can be involved in those things. How about from the developers' side? Do most enterprises that you're talking to, do they have the skillset? Is the ecosystem mature enough for the company to get involved? What do we need to do, looking forward, to help companies be able to take advantage of this opportunity? >> Yeah, so there is this huge challenge in terms of, I guess, just cloud-native skills. And this is exacerbated the more you get into, I guess, what you could think of as traditional kinds of companies, all of whom have tons and tons of data sources. So we need to make it easy, and Swim tries to do this by effectively using skills that people already have, Java or JavaScript, and giving them easy ways to develop, deploy, and then run applications without thinking about them.
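The web-agent pattern described here — a small, stateful, concurrent object per real-world thing, addressed by a URI, linking to related agents and streaming predictions on the fly — can be sketched in a few lines. This is a toy illustration in Python, not Swim's actual API (Swim's web agents are written in Java or JavaScript); the names `WebAgent`, `on_event`, and `predict` are assumptions made for the sketch.

```python
# Toy sketch of a stateful "web agent": one in-memory object per thing
# (e.g. a traffic intersection) that ingests its own raw events, keeps a
# rolling window of state, links to neighbors (like a sub in pub/sub),
# and emits a naive prediction instead of storing raw data.
from collections import deque

class WebAgent:
    def __init__(self, uri):
        self.uri = uri                   # each agent is addressable by a URI
        self.window = deque(maxlen=10)   # recent observations, memory only
        self.links = []                  # linked agents (neighbors, sensors)

    def link(self, other):
        # linking builds the model on the fly: related things subscribe
        self.links.append(other)

    def on_event(self, value):
        # statefully evolve from the agent's own real-world data
        self.window.append(value)

    def predict(self):
        # toy model: predict the next value as the mean of the window
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

intersection = WebAgent("swim://city/intersection/42")
neighbor = WebAgent("swim://city/intersection/43")
intersection.link(neighbor)
for vehicle_count in [4, 6, 5]:
    intersection.on_event(vehicle_count)
print(intersection.predict())  # 5.0
```

The point of the pattern is the inversion Crosby describes: raw events stream in, only predictions stream out, and no bulk store of raw data is needed.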
So instead of binding developers to notions of place, and where databases are, and all that sort of stuff, if they can write simple object-oriented programs about things like intersections, and push buttons, and pedestrian lights, and in-road loops and so on, and simply relate basic objects in the world to each other, then we let data build the model, by essentially creating these little concurrent objects for each thing, and they will then link to each other and solve the problem. We end up solving a huge problem for developers too, which is that they don't need to acquire complicated cloud-native skillsets to get to work. >> Well, absolutely, Simon. It's something we've been trying to do for a long time, is to truly simplify things. Want to let you have the final word. If you look out there, the opportunity, the challenge in the space, what final takeaways would you give to our audience? >> So, very simple. If you adopt a stateful computing architecture, like Swim, you get to go a million times faster. The applications always have an answer. They analyze, learn, and predict on the fly, and they go a million times faster. They use 10% less, no, sorry, 10% of the infrastructure of a store-then-analyze approach. And it's the way of the future. >> Simon Crosby, thanks so much for sharing. Great having you on the program. >> Thank you, Stu. >> And thank you for joining. I'm Stu Miniman. Thank you, as always, for watching theCUBE.

Published Date : Jan 5 2021



Jitesh Ghai, Informatica | CUBE Conversation, July 2020


 

>> Narrator: From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hello, and welcome back to this CUBE Conversation. I'm John Furrier, here in theCUBE Studios, your host for our remote interviews as we continue to get the interviews during COVID-19. Great talk and session here about data warehouses, data lakes, data everything, hybrid cloud, and back on theCUBE for a return, a Cube alumni, virtual alumni: Jitesh Ghai, senior vice president and general manager of data management at Informatica. Great to see you come back. We had a great chat about privacy and data at scale in the last session. Great to see you again. >> Likewise, John. Great seeing you. It's always a pleasure to join you and discuss some of the prevailing topics in the space of data.
Because it's not the old-fashioned way, and data lakes have been around for a while too; by the way, some people call it the data swamp, when they don't take care of it. Talk about those two things and how customers attack that strategic imperative to get it done right. >> Yeah, there's been a tremendous amount of disruption and innovation in the data and analytics stack. And what we're really seeing, I think you mentioned it: 15, even 20 years ago, there were these things called data marts that the finance teams would report against, for financial reporting, regulatory compliance, et cetera. Then there were these things called data warehouses that were bringing together data from across the enterprise, for comprehensive enterprise views, to run the business as well as to perform reporting. And then, with the advent of big data about five years ago, we had Hadoop-based data lakes, which, as you mentioned, were also in many cases data swamps, because of the lack of governance, the lack of cataloging and insights into what is in the lake, who should and shouldn't access the lake. And very quickly that itself got disrupted, from Hadoop to Spark. And very quickly customers realized that, hey, you know what? Managing these 50-, 100-, several-hundred-node Hadoop lakes, Spark lakes, on-premise is extremely expensive: the hardware is extremely expensive, the people are extremely expensive, and the maintaining and patching, et cetera, et cetera. And so the demand very rapidly shifted to cloud-first, cloud-native data lakes. Equally, we're seeing customers realize the benefits of cloud-first, cloud-native: the flexibility, the elasticity, the agility. And we're seeing them realize their data warehouses and reporting in the cloud as well, for the same elastic benefits, for performance as well as for economics. >> So what are the critical capabilities needed to be successful with kind of a modern data warehouse or a data lake, one that lasts and can scale and provide value?
What are those critical capabilities required to be successful? >> For sure, exactly. It's first and foremost cloud-first, cloud-native. But why are we, Informatica, uniquely positioned and excited to enable this modernization of the data and analytics stack in the cloud? It comes down to foundational capabilities that we're recognized as a leader in, across the three Magic Quadrants of metadata management, data integration, and data quality. Oftentimes, when folks are prototyping, they immediately start hand-coding and putting some data together through some basic ingestion capability, and they think that they're building a data lake or populating a data warehouse. But to truly build a system of record, you need comprehensive data management, integration, and data quality capabilities. And that's really what we're offering to our customers, as cloud-first, cloud-native services, so that it's not just your data lakes and data warehouses that are cloud-first, cloud-native; so is your data management stack, so that you get the same flexibility, agility, and resiliency benefits. >> I don't think many people truly understand how important what you just said is, the cloud-native capabilities. In addition to some of those things, it's really imperative to be built for the future. So with that, can you give me a couple of examples of customers that you can showcase to illustrate the success of having the critical capabilities from Informatica? >> Yeah, what we've found is that being data-driven requires organizations to bring data together from various applications and various sources of data, on-premise and in the cloud, from SaaS apps, from cloud PaaS databases, as well as from on-premise databases and on-premise applications. And that's typically done in a data lake architecture. It's in that architecture that you have multiple zones of curation: you have a landing zone, a prep zone, and then certified datasets that you can democratize.
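The multi-zone curation flow just described (landing zone → prep zone → certified datasets) can be illustrated with a minimal sketch. The records, rules, and zone names below are invented for illustration; this is not Informatica's tooling, just the pattern: standardize in the prep zone, then gate certification on business-defined quality rules before the data is democratized.

```python
# Sketch of multi-zone lake curation: raw records land, get standardized
# in prep, and only enter the certified zone if they pass quality rules.

def prep(record):
    # prep zone: trim whitespace and normalize case on string fields
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def passes_quality(record):
    # business-defined quality rules gate entry to the certified zone
    return bool(record.get("customer_id")) and "@" in record.get("email", "")

landing_zone = [
    {"customer_id": "C1", "email": " Ana@Example.com "},
    {"customer_id": "",   "email": "missing-id@example.com"},
    {"customer_id": "C3", "email": "not-an-email"},
]
prep_zone = [prep(r) for r in landing_zone]
certified_zone = [r for r in prep_zone if passes_quality(r)]
print(len(certified_zone))  # 1 record survives certification
```

The design point is that quality rules come from the business (what does "good" mean?) while the pipeline merely enforces them, which is why hand-coded ingestion alone doesn't produce a system of record.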
And we spoke about some of this previously, under the topic of data governance and privacy. What we are enabling with these capabilities of metadata management, data integration, and data quality is onboarding all of this data, comprehensively processing it, and getting it ready for analytics teams, for data science teams. Kelly Services, for example, is managing the recruitment of over half a million candidates using greater data-driven insights within their data lake architecture, leveraging our integration, quality, and metadata management capabilities to realize these outcomes. AXA XL is doing very similar things with their data lake and data warehousing architecture, to inform the data science teams for more productive underwriting. So, a tremendous amount of data-driven insights. Being data-driven, being a data-driven organization, really comes down to this foundational architecture of cloud data warehousing and data lakes, and the associated cloud-first, cloud-native data management that we're enabling our customers to realize, to become data-driven organizations. >> Okay, Jitesh, I've got to put you on the spot on this one. Pretend for a minute I'm a customer. I say: okay, I'm comfortable with my old-fashioned, my grandfather's data warehouse; had it for years. It spits out the reports it needs to spit out. Data lake, I'm really not... I got it, I got a bunch of servers; maybe we'll put our toe in the water there and try it out, but I'm good right now. I'm not sure I'm ready to go there. My boss is telling me, I'm telling them, I'm good. I got a cloud strategy with Microsoft, I've got a cloud strategy with AWS, on paper. We're going to go that way, but I'm not going to move; I need to just stay where I'm at. What do you say to that customer? First of all, I don't think anyone's quite like that, well, unless they're really in the legacy world, or maybe they're locked in, but for the most part, they're saying, hey, I'm not ready to move.
>> We see, we see both. We see the spectrum. To us, data management being cloud-first, being cloud-native, necessitates that your capabilities support hybrid architectures. So there is a class of customers that, for potentially regulatory compliance reasons, typically financial services certainly comes to mind, where, decidedly, the lion's share of their estate is on-premise; it's in old-fashioned data centers. For those customers, we have market-leading capabilities that we've had for many, many, many years, and that's fine. That works too. But we're naturally seeing organizations, even banks and financial services, awaken to all the obvious benefits of a cloud-first strategy and starting to modernize various pieces. First, it was just decommissioning data centers and moving their application, analytics, and data estate to the cloud, as bring-your-own-license, as we refer to it. Very quickly, that has modernized to: I want to leverage the PaaS data offerings within an AWS, within an Azure, within a GCP; I want to leverage this modern data warehouse from Snowflake. And that's when customers are realizing this benefit, and realizing the acceleration of value they can get by unshackling themselves from the burden of managing servers, managing the software, the operating system, as well as the associated applications and databases that need to be administered, upgraded, et cetera, abstracting away all of that so that they can really focus on the problem of data: collecting it, processing it, and enabling the larger lines of business to be data-driven, enabling those digital transformations that we were speaking about earlier.
Certainly, anyone using old-school data warehouses, oh, they look at Snowflake as great. How does a customer who wants to get to that kind of experience set up for that? There's some work that you guys do; we've had many conversations with some of the leaders at Informatica about this, and your board members, and you've got to set the foundation, and you've got to get this done right. Take us through what it takes to do that. I mean, timetable: are we talking months, weeks, days? Is that a migration of a year? It depends on how big it is, but if I do want to take that step, to set my company up for these kinds of large-scale, cloud-native benefits. >> Yeah, great question, great question, John. Really, how customers approach it varies significantly. We have a segment of the market that really just picks up our free trial version, but we have a freemium embedded within the Snowflake experience, so that you can select us, as a Snowflake administrator, as the data management tooling that you want to use to start ingesting, onboarding, and processing data within the Snowflake platform. We have customers that are building net-new data warehouses for a line of business like marketing, where they need enterprise-class, enterprise-scale data management as-a-service capabilities, and that's where we enable and support them. We also see customers recognizing that their on-premise data and analytics stack, their cloud data lake, or their cloud data warehouse is too expensive, is not delivering on the latest and greatest features or the necessary insights, and therefore they are migrating that on-premise data warehouse to a cloud-native data warehouse, like Snowflake, Redshift, BigQuery, and so forth. And that's where we have technologies and capabilities that have helped them build that on-premise data warehouse, the business logic, all the ETL, the processing that was authored on-premise.
We have a way of converting that and repurposing it within our cloud-first, cloud-native paradigms, so that they get the benefit of continued value from their existing estate, but within a modern cloud-first, cloud-native paradigm that's elastic, that's serverless, and so forth. >> Jitesh, always great to speak with you. You've got great thought leadership and expertise, but you're also leading a big group within Informatica around data warehouses and data management in general; you're the GM as well, you've got P&L responsibility. Thanks for coming on. I do want to ask you, while I've got you here, to react to some of the news, and what it means for the enterprise. So I just did a panel session on Sunday, my new "Meet the Analysts" segment show I'm putting together, around the EU's recent decision to shoot down the Privacy Shield law, mainly because of the data sharing. GDPR is kicking in, California is doing something here. It kind of teases out the broader trend of data sharing, right? And responsibility: "Well, I'm going to surveil you," you're going to say. It's not necessarily related to Informatica, so to speak, but it does kind of give a tell sign around this idea of having your data managed, so you can have the kinds of policies you need to be adaptive. It turns out no one knows what's going on: I got data over here, I got data over there, so it's kind of data all over the place. And, you know, one law says this, the other law contradicts it, tons of loopholes. But it points out what can happen when data gets out of control. >> Yeah, and that's exactly right.
And that's why, when I say metadata management is a critical foundational capability to build these modern data and analytics architectures, it's because metadata management enables cataloging and understanding where all your data is and how it's proliferating, and it also enables governance as a result, because metadata management gives you technical metadata, it gives you business metadata. The combination of all of these different types of metadata enables you to have an organized view of your data estate, enables you to plan how you want to process, manage, and work with the data, and who you can and cannot share that data with. And that's the governing framework that enables organizations to be data-driven, to democratize data, but within a governance framework. So, extremely critical: to democratize data, to be more data-driven, you also need to govern data. And that's how metadata management, with integration and quality, really brings things together. >> And to have a user experience that's agile and modern, contemporary, you've got to have the compliance and governance, but you've got to enable the application developers and the use cases to not be waiting. You've got to be fast. >> That's exactly right. In this new modern world, digital transformation, faster pace, everybody wants to be data-driven. And that spans a spectrum, from deeply technical data engineers, data analysts, data scientists, all the way to nontechnical business users that want to do some ad hoc analytics and want the data when they want it. And it's critical. We have built that on a foundation of intelligent metadata, or what we call our CLAIRE engine, and we have built fit-for-use, deliberate experiences for the appropriate personas: the deeply technical ones wanting more technical experiences, all the way to nontechnical business users who just want data in a simple data-marketplace type of shopping paradigm.
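The combination described above — technical plus business metadata rolled up into a governed catalog view that decides who can and cannot share data — can be sketched as a toy in a few lines. This is not CLAIRE or any Informatica API; every name below is invented for the illustration.

```python
# Toy metadata catalog: each dataset carries technical metadata (where it
# lives, what columns it has) and business metadata (owner, sensitivity,
# allowed audiences). Governance checks consult the business metadata.

catalog = {}

def register(dataset, technical, business):
    catalog[dataset] = {"technical": technical, "business": business}

def can_share(dataset, audience):
    # democratize data, but within the rules captured as metadata
    entry = catalog.get(dataset)
    return bool(entry) and audience in entry["business"].get("allowed_audiences", [])

register(
    "customers",
    technical={"source": "crm_db.customers", "columns": ["id", "email"]},
    business={"owner": "marketing", "contains_pii": True,
              "allowed_audiences": ["internal_analytics"]},
)
print(can_share("customers", "internal_analytics"))  # True
print(can_share("customers", "external_partner"))    # False
```

The design choice worth noticing is that the governance decision is driven entirely by metadata, not by inspecting the data itself, which is what lets the same check scale across a fragmented, multicloud estate.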
So it's critical to meet the UX requirements, the user experience requirements, for this varied group of data consumers. >> Great to have you on. I'll let you have the last word. Talk to the people who are watching this that may be a customer of yours, or may be in need of being a customer of Informatica. What's your pitch? What would you say to that customer? Why Informatica? Give the pitch. >> Informatica is laser-focused, singularly focused, on the problem of data management. We are independent and neutral, so we work with your corporate standard, whether it's AWS, Azure, GCP, or your best-of-breed selections, whether it's Snowflake or Databricks. And in many cases, we see the Global 2000 select multiple cloud vendors: one division goes with AWS, another goes with Azure. And so the world of data analytics is decidedly multicloud. While we recognize that data is proliferating everywhere, and there are multiple technologies and multiple PaaS offerings from various cloud vendors where data may reside, including on-premise, and while all of that might be fragmented, you want a single data management capability within your organization that brings together metadata management, integration, and quality, and is increasingly automating the job of data management, leveraging AI and ML. So that in this Data 4.0 world, Informatica is enabling AI-powered data management, so that you can get faster insights, be more data-driven, and deliver more business outcomes. >> Jitesh Ghai, senior vice president and general manager of data management at Informatica. You're watching our virtual coverage and remote interviews with all the Informatica thought leaders and experts and senior executives and customers, here on theCUBE. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date : Jul 22 2020



Itumeleng Monale, Standard Bank | IBM DataOps 2020


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi everybody, welcome back to theCUBE. This is Dave Vellante, and you're watching a special presentation, DataOps In Action, made possible by IBM. You know, what's happening is the innovation engine in the IT economy has really shifted. It used to be Moore's Law; today it's applying machine intelligence and AI to data, really scaling that and operationalizing that new knowledge. The challenge is that it's not so easy to operationalize AI and infuse it into the data pipeline. But what we're doing in this program is bringing in practitioners who have actually had a great deal of success in doing just that, and I'm really excited to have Itumeleng Monale here. She's the executive head of data management for personal and business banking at Standard Bank of South Africa. Itumeleng, thanks so much for coming on theCUBE. >> Thank you for having me, Dave. >> You're very welcome. And first of all, how are you holding up with this COVID situation? How are things in Johannesburg? >> Um, things in Johannesburg are fine. We've been on lockdown now, I think it's day 33, if I'm not mistaken, lost count. But we're really grateful for the swift action of government. We only, I mean, we have less than 4,000 cases in the country, and the infection rate is really slow, so we've really, I think, been able to flatten the curve, and we're grateful for being able to be protected in this way. >> So, all working from home, learning the new normal, and we're all in this together. That's great to hear. Why don't you tell us a little bit about your role? You're a data person; we're really going to get into it. But, you know, how do you spend your time? >> Okay, well, I head up a data operations function and a data management function, which really is the foundational part of the data value chain that then allows other parts of the organization to monetize data and liberate it as the use
cases apply. We monetize it ourselves as well, but really we're an enterprise-wide organization that ensures that data quality is managed, data is governed, that we have the effective practices applied to the entire lineage of the data, that ownership and curation are in place, and everything else, from a regulatory as well as an opportunity perspective, then is able to be leveraged. >> So, historically, you know, data has been viewed as sort of this expense: it's big, it's growing, it needs to be managed, deleted after a certain amount of time. And then, you know, ten years ago, with the big data movement, data became an asset. You had a lot of shadow IT people going off and doing things that maybe didn't comply with the corporate ethics, probably drove you and your part of the organization crazy. But talk about that: how, what has changed in the last, you know, five years or so, just in terms of how people approach data? >> Oh, I mean, you know, the story I tell my colleagues, who are all bankers obviously, is the fact that the banker in 1989 had to mainly just know debits, credits, and be able to look someone in the eye and know whether or not they'd be a credit risk, you know, if we lend you money and you pay it back. The banker of the late '90s had to then contend with the emergence of technologies that made their lives easier and allowed for automation, and for processes to run much more smoothly. In the early two-thousands, I would say that digitization was a big focus, and in fact my previous role was head of digital banking, and at the time we thought digital was the panacea: it is the be-all and end-all, it's the thing that's going to make organizations succeed. And lo and behold, we realized that once you've gotten all your digital platforms ready, they are just the plate, or the pipe, and nothing is flowing through it, and there's no food on the plate, if data is not the main course, really. Um, it's always been an asset; I think organizations just never consciously knew that data was that. >> Okay, so, so it sounds
like once you've made that sort of initial digital transformation, you really had to work it. And what we're hearing from a lot of practitioners like yourself is that challenges related to that involve different parts of the organization, different skill sets, and challenges in sort of getting everybody to work together on the same page. But maybe you could take us back to sort of when you started on this initiative around DataOps. What was that like? What were some of the challenges that you faced, and how'd you get through them? >> Okay. First and foremost, Dave, organizations used to believe that data was IT's problem, and that's probably why you then saw the emergence of things like shadow IT. But when you really acknowledge that data is an asset, just like money is an asset, then you have to take accountability for it, just the same way as you would any other asset in the organization, and you will not abdicate its management to a separate function that's not close to the business. And oftentimes IT is seen as a support or an enabler, but not quite the main show in most organizations, right? So what we then did is first emphasize that data is a business capability, a business function. It resides in business, just like product management, just like marketing, just like everything else that the business needs. And data management also has to apply to every role in every function, to different degrees and varying extents. And when you take accountability as an owner of a business unit, you also take accountability for the data in the systems that support the business unit. For us, that was the first piece, um, and convincing my colleagues that data was their problem, and not something that they could just kind of leave us to, was also a journey, but that was kind of the first step into it. In terms of getting the data operations journey going, um, you had to first acknowledge, please carry on... no, you just had to first acknowledge that it's
something you must take accountability for as a banker, not cede it to a different part of the organization. >> That's a real cultural mindset. You know, in the game of rock-paper-scissors, culture kind of beats everything, doesn't it? It's almost like a trump card. And so the business embraced that, but what did you do to support that? There has to be trust in the data, there has to be timeliness. And so maybe you could take us through how you achieved those objectives, and maybe some other objectives that the business demanded. >> So the one thing I didn't mention, Dave, is that obviously they didn't embrace it in the beginning. It wasn't an "oh yeah, that makes sense, let's do that" type of conversation. Um, what we had was a few very strategic people with the right mindset that I could partner with, that understood the case for data management. And while we had that as an in, we developed a framework for a fully matured data operations capability in the organization, and what that would look like in a target-state scenario. And then what you do is you wait for a good crisis. So we had a little bit of a challenge, in that our local regulator found us a little bit wanting in terms of our data quality, and from that perspective it then brought the case for data quality management. So now there's a burning platform; you have an appetite for people to partner with you and say, okay, we need this to comply, help us out. And when they start seeing it in action, they then buy into the concept. So sometimes you need to just wait for a good crisis and leverage it, and only do that which the organization will appreciate at that time. You don't have to go big bang. Data quality management was the use case at the time, five years ago, so we focused all our energy on that, and after that it gave us leeway and license to really bring to maturity all the other capabilities that the business might not have understood as well. >> So when that crisis hit, thinking about
people, process, and technology, you probably had to turn some knobs in each of those areas. Can you talk about that? >> From a technology perspective, that's when we partnered with IBM to implement Information Analyzer, in terms of making sure we could profile the data effectively. What was important for us was to make strides in showing the organization progress, but also to give them access to self-service tools that would give them insight into their data. From a technology perspective, that was, I think, the genesis of us implementing the IBM suite in earnest from a data management perspective. People-wise, we also began a data stewardship journey in which we implemented business unit stewards of data. I don't like using the word "steward," because in my organization it's taken lightly, almost like a part-time occupation, so we converted them; we call them data managers. The analogy I would give is that every department with a P&L, any department worth its salt, has an FD, a financial director. If money is important to you, you have somebody helping you take accountability and execute on your responsibilities in managing that money. So if data is equally important as an asset, you will have a leader, a manager, helping you execute on your data ownership accountabilities. That was the people journey. So firstly, I had soldiers planted in each department, the data managers, who would then continue building the culture and maturing the data practices as applicable to each business unit's use cases. What was important was that in every business unit the data manager focused their energy on making that business unit happy, by ensuring that their data was of the right compliance level and the right quality, followed the right best practices from a process and management perspective, and was governed. And then in terms of process, it's really about spreading data management as a practice through the entire ecosystem. Data management can be quite lonely, in the sense that unless the whole business of an organization is managing data, people are worried about doing what they do to make money, and most practitioners in most business units will be the only unicorn relative to everybody else who does what they do. So for us it was important to have a community of practice, a process where all the data managers across the business, as well as the technology partners and the specialists who are data management professionals, come together and make sure we work together on specific use cases. >> So I wonder if I can ask you: the industry likes to market this notion of DevOps applied to data, DataOps. Have you applied that type of mindset and approach, agile, continuous improvement? I'm trying to understand how much is marketing and how much is actually applicable in the real world. Can you share? >> Well, you know, when I was reflecting on this before this interview, I realized that our very first use case of DataOps was probably when we implemented Information Analyzer in our business unit, simply because it was the first time that IT and business, as well as data professionals, came together to spec the use case, and then we would literally, in an agile fashion, with a multidisciplinary team, come together to make sure we got the outcomes we required. I mean, for you to get to a data quality management paradigm where we moved from 6% quality at some point on our client data, and now we're sitting at 99 percent, and that 1% is literally just a timing issue, to get from 6 to 99 you have to make sure the entire value chain is engaged. Our business partners were the fundamental determinants of the business rules applied, in terms of what quality means and what the criteria of quality are, and then what we do is translate that into what we put in the catalog and ensure that the profiling rules we run are against those business rules that were defined up front.
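The loop she describes, where business owners define what quality means, those rules go into the catalog, and the profiling run scores the data against them, can be sketched in a few lines of Python. This is a minimal illustration only, not the IBM Information Analyzer implementation; the record fields and rules here are hypothetical:

```python
# Minimal sketch of rule-based data quality profiling.
# Business owners define the rules; the profiling job scores records against them.
client_records = [
    {"client_id": "C001", "id_number": "8001015009087", "branch": "JHB-01"},
    {"client_id": "C002", "id_number": "", "branch": "JHB-01"},
    {"client_id": "C003", "id_number": "7502230134082", "branch": ""},
]

# Hypothetical business rules: each maps a rule name to a validity check.
rules = {
    "id_number is 13 digits": lambda r: len(r["id_number"]) == 13 and r["id_number"].isdigit(),
    "branch is populated": lambda r: bool(r["branch"]),
}

def quality_score(records, checks):
    """Percentage of records that pass every business rule."""
    passing = sum(1 for r in records if all(check(r) for check in checks.values()))
    return 100.0 * passing / len(records)

print(f"quality: {quality_score(client_records, rules):.0f}%")  # quality: 33%
```

Getting from 6% to 99% is then a matter of tightening the rules with the business and remediating the records that fail them, which is exactly the value-chain engagement she describes.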
So you'd have upfront determination of the outcome with business, and then the team would go into an agile cycle of maybe two-week sprints, where we develop certain things, have stand-ups, come together, and then the output would be storyboarded in a prototype fashion, where business then gets to go double-check it. That was the first iteration, and I would say we've become much more mature at it, and we've got many more use cases now. There's actually one that's quite exciting that we recently achieved, over the end of 2019 into the beginning of this year. What we did was... I'm worried about the sunlight coming in through the window. >> It looks great to me, like sunset in South Africa. We've been on theCUBE sometimes when it's so bright we have to put on sunglasses. >> So the most recent one, which was in late 2019 coming into early this year: we had long achieved the compliance and regulatory burning-platform issues, and now we're in a place, I think, of opportunity and luxury, where we can find use cases that are pertinent to business execution and business productivity. The one that comes to mind: we're a hundred and fifty-eight years old as an organization, right? So this bank was born before technology. It was also born in the days of literally no integration, because every branch was a standalone entity. You'd have these big ledgers that transactions were documented in, and I think once every six months or so these ledgers would be taken by horse-drawn carriage to a central place to get reconciled between branches, on paper. The point is, if that is your legacy, the initial ERP implementations would have been focused on process efficiency based on old ways of accounting for transactions and allocating information, so it was not optimized for the 21st century. Our architecture has had a huge legacy burden on it, and getting to a place where we can be agile with data is something we're constantly working toward. So we're in a place where we have hundreds of branches across the country, all of them obviously attending to clients, servicing clients as usual, and anyone leading sales teams or executional teams was not able, in a short space of time, to see the impact of their tactics from a data perspective, from a reporting perspective. We were in a place where, in some cases, based on how our ledgers roll up and how the reconciliation between various systems and accounts works, it would take you six weeks to verify whether your tactics were effective or not, because for the revenue to actually hit our general ledger and our balance sheet might take that long. That is an ineffective way to operate in such a competitive environment. So what you had were frontline sales agents literally manually documenting the sales they had made, but not able to verify whether that was bringing in revenue until six weeks later. So what we did then was sit down and define all the requirements from a reporting perspective, and the objective was to move from six weeks of latency to 24 hours. And even 24 hours is not perfect; our ideal would be that by close of day you're able to see what you've done for that day, but that's the next epoch we'll go through. We literally had the frontline teams defining what they'd want to see in a dashboard, the business teams defining what the business rules behind the quality and the definitions would be, and then we had an entire analytics team and the data management team working on sourcing the data, optimizing and curating it, and making sure the latency came down. That's, I think, only our latest use case for DataOps, and now we're in a place where people can look at a dashboard, it's self-service, at any time, and see the sales they've made, which is very important right now, at the time of COVID-19, from a productivity and executional
competitiveness standpoint. >> Those are two great use cases. So the first one, going from data quality of 6% to 99%: at 6%, all you do is spend time arguing about whose data is right, and at 99% you're there, and you said it's basically just a timing issue, latency in the timing. And then the second one: instead of paving the cow path with an outdated, ledger-based data process that took weeks, you've now compressed that down to 24 hours, and you want to get to end of day. So you've built agility into your data pipeline. I'm going to ask you then: when GDPR hit, were you able to very quickly leverage this capability and apply it, and then maybe other compliance edicts as well? >> Well, actually, you know, what we just discussed was post-GDPR for us. We got GDPR right about three years ago, but literally all we got right was reporting for risk and compliance purposes. The use cases we have now are really around business opportunity. So we prioritized compliance reporting a long time ago, and we're able to do real-time reporting from a single-transaction perspective, suspicious transactions and so on, to our reserve bank and our regulator. From that perspective, that was what was prioritized in the beginning, which was the initial crisis. So what you found was an entire engine geared toward making sure that data quality was correct for reporting and regulatory purposes. But really, that is not the be-all and end-all of it, and if that's all we did, I believe we really would not have succeeded; it would have stayed there. We succeeded because data monetization, the leveraging of data for business opportunity, is actually what tells you whether you've got the right culture or not. If you're just doing it to comply, it means the hearts and minds of the rest of the business still aren't in the data game. >> I love this story, because it's nirvana. For so many years we've been pouring money into mitigating risk, and you have no choice but to do it: the general counsel signs off on it, the CFO grudgingly signs off on it, but it's got to be done. And for years, decades, we've been waiting to use these risk initiatives to actually drive business value. It kind of happened with the enterprise data warehouse, but it was too slow and complicated, and it certainly didn't happen with email archiving; that was just sort of a tech bolt-on. It sounds like we're at that point today, and I want to ask you: we were talking earlier about how the crisis precipitated this cultural shift and you took advantage of that. Well, now Mother Nature has dealt us a crisis like we've never seen before. How do you see your data infrastructure, your data pipeline, your DataOps? What kind of opportunities do you see in front of you today as a result of COVID-19? >> Well, because of the quality of client data that we had, we were able to respond very quickly to COVID-19. In our context, the government put us on lockdown relatively early in the curve, or in the cycle of infection, and what it meant was a bit of a shock to the economy, because small businesses all of a sudden didn't have a source of revenue for potentially three to six weeks. Based on the data quality work we did before, it was actually relatively easy to be agile enough to do the things that we did. So within the first weekend of lockdown in South Africa, we were the first bank to proactively and automatically offer small businesses, and students with loans on our books, an instant three-month payment holiday, assuming they were in good standing. And we did that up front; it was actually an opt-out process, rather than you having to phone in and arrange for it to happen. I don't believe we would have been able to do that if our data quality was not right. We have since launched many more initiatives to try and keep the economy
going, and to try and keep our clients in a state of liquidity. So, you know, data quality at that point, and in that domain, is critical to knowing who you're talking to, who needs what, and which solutions would be best fitted to various segments. I think the second component is that working from home now brings an entirely different normal. If we had not been able to provide productivity dashboards and sales dashboards to management and all the users that require them, we would not be able to validate or say what our productivity levels are now that people are working from home. I mean, we still have essential-services workers who physically go into work, but a lot of our relationship bankers are operating from home, and having that baseline and foundation, productivity tracking for various metrics that can be reported on in a short space of time, has been really beneficial. The next opportunity for us: we've been really good at doing this for the normal operational and frontline types of workers, but knowledge workers have not necessarily been big productivity reporters historically. They deliver an output, and the output might be six weeks down the line. But in a place where teams are not co-located and work needs to flow in an agile fashion, we need to start using the same foundation and data pipeline that we've laid down for the reporting of knowledge work and agile-team-type metrics. So in terms of developing new functionality and solutions, there's a flow in a multidisciplinary team, and how do those solutions get architected in a way where data assists in the flow of information, so solutions can be optimally developed? >> Well, it sounds like you're able to map the metrics that business lines care about into these dashboards, a sort of data-mapping approach, if you will, which makes it much more relevant for the business. As you said before, they own the data. That's got to be a huge business benefit. We talked about culture, we talked about speed, but the business impact of being able to do that has to be pretty substantial. >> It really, really is, and the use cases really are endless, because every department finds its own opportunity to utilize data. I also think the accountability factor has significantly increased, because as the owner of a specific domain of data, you know that you're not only accountable to yourself and your own operation; people downstream of you depend on you, as a product and an outcome, to ensure that the quality of the data you produce is of a high standard. So curation of data is a very important thing, and the business is really starting to understand that. So, you know, the cards department knows that they are the owners of card data, and the vehicle asset department knows that they are the owners of vehicle data. Those are linked to a client profile, and all of that creates an ecosystem around the client. I mean, when you come to a bank, you don't want to be known as a number, and you don't want to be known for just one product; you want to be known across everything you do with that organization. But most banks are not structured that way. They are still product houses, with product systems on which your data resides, and if those don't act in concert, then we come across as extremely schizophrenic, as if we don't know our clients. >> That's very, very important. I feel like I could go on for an hour talking about this topic, but unfortunately we're out of time. Thank you so much for sharing your deep knowledge and your story; it's really an inspiring one, and congratulations on all your success. I guess I'll leave it with: what's next? You gave us a glimpse of some of the things you want to do, compressing some of the elapsed times and the time cycles, but where do you see this going in the near term, and kind of midterm and longer term? >> Currently, I mean, obviously AI is a big opportunity for all organizations, and you don't get the automation of anything right if the foundations are not in place. So I believe this is a great foundation for anything AI to be applied to, in terms of the use cases that we can find. The second one is really providing an API economy, where certain data products can be shared with third parties. I think that's probably where we want to take things as well. We are already utilizing external third-party data sources in our data quality management suite, to ensure validity of client identity and residence and things of that nature. But going forward, because fintechs and banks and other organizations are probably going to partner to be more competitive, we need to be able to provide data products that can then be leveraged by external parties, and vice versa. >> We'll have to leave it there. Thanks again, great having you. >> Thank you very much, Dave. I appreciate the opportunity. >> And thank you for watching, everybody. There you go: we're digging into DataOps. We've got practitioners, we've got influencers, we've got experts, and we're going into the crowd chat, crowdchat.net/dataops. Keep it right there; we'll be back with more coverage. This is Dave Volante for theCUBE. (music)
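The reporting use case in the interview, compressing sales visibility from six weeks of ledger reconciliation down to a 24-hour dashboard refresh, boils down to a daily rollup job at the grain the frontline teams asked for. A toy sketch follows; the schema, field names, and grain are assumptions for illustration, not the bank's actual design:

```python
from collections import defaultdict
from datetime import date

# Hypothetical frontline transactions captured during the day.
transactions = [
    {"agent": "A1", "product": "card", "amount": 1200.0, "day": date(2020, 5, 27)},
    {"agent": "A1", "product": "loan", "amount": 5000.0, "day": date(2020, 5, 27)},
    {"agent": "A2", "product": "card", "amount": 800.0, "day": date(2020, 5, 27)},
]

def daily_rollup(txns, day):
    """Aggregate sales per agent for one day: the grain the dashboard shows."""
    totals = defaultdict(float)
    for t in txns:
        if t["day"] == day:
            totals[t["agent"]] += t["amount"]
    return dict(totals)

print(daily_rollup(transactions, date(2020, 5, 27)))  # {'A1': 6200.0, 'A2': 800.0}
```

Running a job like this once a day, against curated source data, is what replaces waiting for revenue to surface in the general ledger; getting to their "by close of day" ideal is a matter of running the same rollup on an intraday schedule.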

Published Date : May 28 2020


Daphne Koller, insitro | WiDS Women in Data Science Conference 2020


 

>> Live from Stanford University, it's theCUBE, covering Stanford Women in Data Science 2020, brought to you by SiliconANGLE Media. >> Hi, and welcome to theCUBE. I'm your host, Sonia Tagare, and we're live at Stanford University, covering WiDS, the Women in Data Science conference, the fifth annual one. Joining us today is Daphne Koller, who is the founder and CEO of insitro. Daphne, welcome to theCUBE. >> Nice to be here, Sonia. Thank you for having me. >> So tell us a little bit about insitro, how you got it founded, and more about your role. >> So I've been working at the intersection of machine learning and biology and health for quite a while, and it was always a bit of an interesting journey, in that the data sets were quite small and limited. We're now in a different world, where there are tools that are allowing us to create massive biological data sets that I think can help us solve really significant societal problems. And one of those problems that I think is really important is drug discovery and development, where despite many important advancements, the costs just keep going up and up and up, and the question is: can we use machine learning to solve that problem better? >> And you talk about this more in your keynote, so give us a few highlights of what you talked about. >> So you can think of drug discovery and development in the last 50 to 70 years as being a bit of a glass-half-full, glass-half-empty story. The glass half full is the fact that there are diseases that used to be a death sentence, or a sentence to a lifetime of pain and suffering, that are now addressed by some of the modern-day medicines, and I think that's absolutely amazing. The other side of it is that the cost of developing new drugs has been growing exponentially, in what's come to be known as Eroom's Law, it being the inverse of Moore's Law, which is the one we're all familiar with, because the number of drugs approved per billion U.S. dollars just keeps going down exponentially. So the question is: can we change that curve? >> And you talked in your keynote about the interdisciplinary culture. Tell us more about that. >> I think in order to address some of the critical problems we're facing, one needs to really build a culture of people who work together from different disciplines, each bringing their own insights and their own ideas into the mix. So at insitro we actually have a company that's half life scientists, many of whom are producing data for the purpose of driving machine learning models, and the other half are machine learning people and data scientists who are working on those. But it's not a handoff, where one group produces the data and the other one consumes and interprets it; rather, they start from the very beginning to understand what problems one could solve together, how to design the experiments, how to build the model, and how to derive insights from that which can help us make better medicines for people. >> And I also wanted to ask you: you co-founded Coursera, so tell us a little bit more about that platform. >> So I founded Coursera as a result of work that I'd been doing at Stanford on how technology can make education better and more accessible. This was a project that I did here with a number of my colleagues as well, and at some point in the fall of 2011 there was an experiment: let's take some of the content we'd been developing within Stanford and put it out there for people to just benefit from. We didn't know what would happen; would it be a few thousand people? But within a matter of weeks, with minimal advertising other than one New York Times article that went viral, we had a hundred thousand people in each of those courses. That was a moment in time where, you know, we looked at this and said: can we just go back to writing more papers, or is this an incredible opportunity to transform access to education for people all over the world? And so I ended up taking what was supposed to be a temporary leave of absence from Stanford to go and co-found Coursera. I thought I'd go back after two years, but at the end of that two-year period there was just so much more to be done, and so much more impact that we could bring to people all over the world, people of both genders, people of different socioeconomic status, in every single country around the world, that I just felt this was something I couldn't not do. >> And why did you decide to go from an educational platform to machine learning and biomedicine? >> So I'd been doing Coursera for about five years. In 2016 the company was on a great trajectory, but it's primarily a content company, and around me machine learning was transforming the world, and I wanted to come back and be part of that. When I looked around, I saw machine learning being applied to e-commerce and to natural language and to self-driving cars, but there really wasn't a lot of impact being made in the life science area, and I wanted to be part of making that happen, partly because I felt, coming back to our earlier comment, that in order to really have that impact you need someone who speaks both languages. And while there's a new generation of researchers who are bilingual in biology and machine learning, it's still a small group, and there are very few of those in kind of my age cohort, so I thought I would be able to have a real impact by building a company in the space. >> So it sounds like your background is pretty varied. What advice would you give to women who are just starting college now, who may be interested in a similar field? Would you tell them they have to major in math, or do you think there are some other majors that may be influential as well? >> I think there are a lot of ways to get into data science. Math is one of them, but there's also statistics or physics, and I would say that, especially for
the field that I'm currently in, which is at the intersection of machine learning and data science on the one hand and biology and health on the other, one can get there from biology or medicine as well. But what I think is important is not to shy away from the more mathematically oriented courses in whatever major you're in, because that foundation is a really strong one. There are a lot of people out there who are basically lightweight consumers of data science, and they don't really understand how the methods they're deploying actually work, and that limits their ability to advance the field and come up with new methods better suited, perhaps, to the problems they're tackling. So I think it's totally fine, and in fact there's a lot of value, in coming into data science from fields other than computer science, but taking courses in those areas, even while you're majoring in whatever field you're interested in, is going to make you a much better person who lives at that intersection. >> And how do you think having a technology background has helped you in founding your companies and becoming a successful CEO? >> In companies that are very strongly R&D-focused, like insitro and others, having a technical co-founder is absolutely essential. It's fine to have an understanding of what the user needs and come from the business side of it, and a lot of companies have a business co-founder, but not understanding what the technology can actually do is highly limiting, because you end up hallucinating: oh, if we could only do this, that would be great. But you can't, and people oftentimes end up making ridiculous promises about what technology will or will not do, because they just don't understand where the landmines sit and where you're going to hit real obstacles on the path. So I think it's really important to have a strong technical foundation in these companies. >> And that being said, where do you see insitro in the future, and how do you see it solving the challenges you talked about in your keynote? >> So we hope that insitro will be a fully integrated drug discovery and development company, one that is based on a slightly different foundation than a traditional pharma company. They grew up in the old approach of very much bespoke scientific analysis of the biology of different diseases, and then going after targets or ways of dealing with the disease that are driven by human intuition. Where I think we have the opportunity to go today is to build a very data-driven approach that collects massive amounts of data and then lets analysis of those data reveal new hypotheses, ones that might not accord with people's preconceptions of what matters and what doesn't. So hopefully we'll be able, over time, to create enough data and apply machine learning to address key bottlenecks in the drug discovery and development process, so we can bring better drugs to people, and we can do it faster and, hopefully, at much lower cost. >> That's great. And you also mentioned in your keynote that you think the 2020s will be a digital biology era, so tell us more about that. >> So if you take a historical perspective on science and think back, you realize that there are periods in history where one discipline has made a tremendous amount of progress in a relatively short amount of time, because of a new technology or a new way of looking at things. In the 1870s that discipline was chemistry, with the understanding of the periodic table and that you actually couldn't turn lead into gold. In the 1900s it was physics, with the understanding of the connection between matter and energy, and between space and time. In the 1950s it was computing, where silicon chips were suddenly able to perform calculations that, up until that point, only people had been able to do. And then in the 1990s there was an interesting bifurcation: one branch was the era of data, which is related to computing but also involves elements of statistics and optimization and neuroscience, and the other was quantitative biology, in which biology moved from a descriptive science of taxonomizing phenomena to really probing and measuring biology in a very detailed, high-throughput way, using techniques like microarrays that measure the activity of 20,000 genes at once, the sequencing of the human genome, and many others. But these two fields kind of evolved in parallel, and what I think is coming now, 30 years later, is the convergence of those two fields into one field that I like to think of as digital biology, where we are able, using the tools that have been and continue to be developed, to measure biology at entirely new levels of detail, fidelity, and scale; where we can use the techniques of machine learning and data science to interpret what we're seeing; and where we can then use some of the technologies that are also emerging to engineer biology to do things it otherwise wouldn't do. That will have implications in biomaterials, in energy, in the environment, in agriculture, and I think also in human health. It's an incredibly exciting space to be in right now, because just so much is happening, and the opportunities to make a difference and make the world a better place are just so large. >> That sounds awesome, Daphne. Thank you for your insight, and thank you for being on theCUBE. >> Thank you. >> I'm Sonia Tagare. Thanks for watching, and stay tuned for more.
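The "inverse of Moore's Law" discussed at the top of the interview is usually called Eroom's Law ("Moore" spelled backwards): the number of new drugs approved per billion inflation-adjusted U.S. dollars of R&D spending has fallen roughly by half about every nine years since 1950. A back-of-the-envelope sketch of that curve, where the base value and halving period are rounded illustrative assumptions rather than published figures:

```python
def drugs_per_billion(year, base_year=1950, base_value=30.0, halving_years=9.0):
    """Eroom's Law sketch: approvals per $1B of R&D halve every ~9 years."""
    return base_value * 0.5 ** ((year - base_year) / halving_years)

# Exponential decline: each later decade yields far fewer approvals per dollar.
for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
```

The point of the curve, and of the interview, is that reversing it likely requires changing how hypotheses are generated, which is where the data-driven, machine-learning-first approach comes in.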

Published Date : Mar 3 2020


Wendy Mars, Cisco | Cisco Live EU Barcelona 2020


 

>> Live from Barcelona, Spain, it's theCube, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back, everyone, to the Cube's live coverage, day four of four days of wall-to-wall action here in Barcelona, Spain, for Cisco Live 2020. I'm John Furrier with my co-host Dave Volante, with a very special guest here to wrap up Cisco Live: the president of Europe, Middle East, Africa and Russia for Cisco, Wendy Mars, Cube alumni. Great to see you. Thanks for coming on to kind of put a bookend on the show here. Thanks for joining us. >> It's absolutely great to be here. Thank you. >> So what a transformation. As Cisco's business model continues to evolve, we've been saying brick by brick, we still think a big move is coming. I think there's more action. I can sense the walls talking to us, like Cisco Live in the US, and more technical announcements in the next 24 months. You can see where it's going. It's cloud, it's apps, it's policy-based programmability. It's really a whole other business model shift for you and your customers. A technology shift and a business model shift. So I want to get your perspective. This year's opening keynote, you led it off talking about the philosophy of the business model, but also the first presenter was not a networking guy, it was an application person, AppDynamics. Yep, this is a shift. What's going on with Cisco? What's happening? What's the story? >> You know, all of the work that we're doing is really driven by what we see from requirements from our customers, the change that's happening in the market. And it is all around, you know, if you think digital transformation is the driver, organizations now are incredibly interested in, how do they capture that opportunity? How do they use technology to help them? But, you know, if you look at it, really there's three items that are so important. It's the business model evolution.
It's actually the business operations for organizations, plus their people, the people and the communities within, those three things working together. And if you look at it, it's so exciting with AppDynamics there, because for us within Cisco, that linkage of the application layer through into the infrastructure, into the network, and bringing that linkage together, is the most powerful thing, because that's the insights and the value our customers are looking for. >> You know, we've been talking about the innovation sandwich: you've got data in the middle and you've got technology and applications underneath. That's kind of what's going on here, but I'm glad you brought up the part about business model. This is operations and people and communities. During your keynote, you had a slide that laid out three kinds of pillars. Yes, people and communities, business model, and business operations. There was no 800 series in there. There were no product discussions. This is fundamentally the big shift, that business models are changing. I tweeted provocatively, the killer app is the digital business model. Because you think about it, the applications are the business. What's running under the covers is the technology, but it's all shifting and changing, so every single vertical, every single business is impacted by this. This is not like a certain secular thing in the industry. This is a real change. Can you describe how those three things are operating with that? >> Sure. I think if you look, you know, thinking through those three areas, if you look at the actual business model itself, the business models of organizations are fundamentally changing, and they're changing because, as consumers, we are all much more specific about what we want. We have incredible choice in the market. We are more informed than ever before.
But also we are interested in the values of the organizations that we're getting the capability from, as well as the products and the services that naturally we're looking to gain. So if you look at that business model itself, this is about, you know, organizations making sure they stay ahead from a competitive standpoint, around the innovation of portfolio that they're able to bring, but also that they have a strong, strong focus around the experience that their customer gains from an application touch standpoint. That all comes through those different channels, which is, at the end of the day, the application. Then if you look as to how you deliver that capability through the systems, the tools, the processes: as we all evolve our businesses, you have to change the dynamic within your organization to cope with that. And then, of course, in driving any transformation, the critical success factor is your people and your culture. You need your teams with you. The way teams operate now is incredibly different. It's no longer command and control, it's agile capability coming together. You need that to deliver on any transformation, never mind have it be smooth, you know, in the execution. They're all three together. >> But what I like about that model, and I have to say, this is, you know, 10 years of doing the Cube, you see that marketing in the vendor community often leads what actually happens. Not surprising. As we entered the last decade, there was a lot of talk about cloud. Well, it kind of was a good predictor. We heard a lot about digital transformation. A lot of people roll their eyes and think it's a buzzword, but we really are, I feel, exiting this cloud era into the digital era. It feels real, and there are companies that get it and are leaning in. There are others that maybe are complacent. I'm wondering what you're seeing in Europe, just in terms of everybody talks digital, every CEO wants to get it right. But there is complacency.
There's financial services that say, well, I'm doing pretty well, not on my watch. Others say, hey, we want to be the disruptors and not get disrupted. What are you seeing in the region in terms of that sentiment? >> I would say across the region, you know, there will always be verticals and industries that are slightly more advanced than others. But I would say that in the bulk of conversations that I'm engaged in, independent of the industry or the country in which we're having that conversation, there is an acceptance of: transformation, digital transformation, is here. It is affecting my business. If I don't disrupt, I myself will be disrupted, and the challenge is, help me. So, you know, they're not disputing the end state, and they want guidance and support to drive the transition in a risk-mitigated manner, and they're looking for help in that. There's actually pressure in the boardroom now around, what are we doing, within organizations, within the enterprise, right through the public sector, any type or style of company. There's that pressure point in the boardroom of, come on, we need to move at speed. >> Now, the other thing about your model is that technology plays a role and contributes. It's not the be-all end-all, but it plays a role in each of those: the business model, business operations, developing and nurturing communities. Can you add more specifics? What role do you see technology playing in terms of advancing those three areas? >> So I think, you know, if you look at it, technology is fundamental to all of those spheres in regard to the innovation, that differentiation, technology can bring. The key challenge is being able to apply it in a manner where you can really see differentiation of value within the business, and then the customer's organization. Otherwise, it's technology for the sake of technology.
So we see very much a movement now to this conversation of, talk about the use case, the use cases, the ways by which that innovation can be used to deliver value to the organization, and also the different ways by which a company will work. Look at the collaboration capability that we announced earlier this week, helping to bring to life that agility. Look at the AppDynamics discussion, helping to link the layer of the application into the infrastructure, into the network, to get to root cause identification quickly and to understand where you may have a problem before it actually arises and causes downtime. Many, many ways. >> I think the agility message has always been a technical conversation. Agile methodology, technology, software development, no problem, check. That's 10 years ago. But business agility is moving from a buzzword to reality. Exactly. That's what you're kind of getting at. >> It's how teams operate, how they work, being able to be quick, efficient, stand up, stand down, and operate in that way. >> You know, we were kind of thinking out loud on the Cube, just riffing with Fabio Gori on your team, on Cisco's team, about clarification with Gene Kim around kind of real time. What was interesting is, we're like, okay, it's been 13 years since the iPhone, and so 13 years of mobile. In your territory, in Europe, Middle East, Africa, mobility was around before the iPhone, and data privacy is much more advanced in your region. So you have a region that's pretty much, I think, the tell sign for what's going on in North America and around the world. And so you think about that and say, okay, how is value created? How are the economics changing? This is really the conversation about the business model, is, okay, if the value activities are shifting and being more agile, and the economics are changing with SaaS, if someone's not on this bandwagon it's not an end state discussion. It's a done deal.
>> Yeah, but I think also there are some other conversations which are very prevalent here in the region, around trust, around privacy law, understanding compliance. If you look at data, where data resides, portability of that data. GDPR came from Europe and has pushed out on those conversations, and they will continue as we go over time. And if I also look at, you know, the dialogue that you saw within the World Economic Forum around sustainability, that is becoming a key discussion now within government here in Spain, you know, from a climate standpoint, and many other areas as well. >> David, I've been riffing around this whole question of where the innovation is coming from. It's coming from your region, not so much the US. We've got some great innovations, but look at blockchain. The US is like, don't touch it. Pretty progressive outside the United States. A little dangerous too, but that's where innovation is coming from, and this is really the key that we're focused on. I want to get your thoughts on how you see it going to the next level. The next-gen business model. What's your vision? >> So I think there'll be lots of things. Look at the introduction of artificial intelligence, robotics capability, 5G of course, you know, on the horizon. We have Mobile World Congress here in Barcelona in a few weeks' time. And as you talked about with the iPhone, the smartphone: of course, when 4G was introduced, no one knew what the use case for that would be. It was the smartphone, which wasn't around at that time. So with 5G and the capability there, that will bring again yet more change to the business model for different organizations, and capability, and what we can bring to market. >> The way we think about AI, privacy, data ownership becomes more important. Some of the things you were talking about before. It's interesting what you're saying.
John and Wendy, GDPR set this standard, and you're seeing in the US there are stovepipes for that standard. California is going to do one, every state is going to have a difference, and that's going to slow things down. It's going to slow down progress. Do you see sort of an extension of a GDPR-like framework being adopted across the region, potentially accelerating some of these sticky issues and public policy issues, so they can actually move the market forward? >> I think that will, because I think there'll be more and more, if you look at this terminology of, data is the new oil. What do you do with data? How do you actually get value from that data and make intelligent business decisions around it? So, yeah, that's critical. But yet, for all of us, we are extremely passionate about where our data is used, again back to trust and privacy. You need compliance, you need regulation. And I think this is just the beginning of how we will see that evolving. >> You know, I want to get your thoughts. David, I've been riffing for 10 years around the death of storage. Long live storage. But data needs to be stored somewhere. Networking is the same kind of conversation, it just doesn't go away. In fact, there's more pressure now. The smartphone, that was 13 years ago, and before that, mobility. Data and video are now super important drivers. That's putting more pressure on you guys. And so, hey, we did well, networking. So it's kind of like Moore's Law: more networking, more networking. So video and data are now big. Your thoughts on video and data? >> If you look at the Internet of the future, you know, for all of us now, we are also demanding as individuals around capability and access to that Internet of the future, the next phase. We want even more, so there'll be more and more requirement for speed, availability, that reliability of service, the ways by which we engage and we communicate.
There are some fundamentals there, so it's continuing to grow, which is so, so exciting for us. >> So you talk about digital transformation. That's obviously on the minds of C-level executives. I've got to believe security is up there as a topic. What's the conversation like in the corner office when you go visit your customers? >> So I think there's a huge excitement around the opportunity, realizing the value of the opportunity. You know, top-of-mind conversations are around security, around making sure that you can maintain that fantastic customer experience, because if you don't, the customer will go elsewhere. How do you do that? How do you enrich at all times? And also looking at market adjacencies. You know, as you go and talk at senior levels within organizations, independent of the industry they're in, there's a huge amount of commonality that we see across them, consistent problems which organizations are trying to solve. And actually, one of the big questions is, what's the pace of change that I should operate at? When is it too fast, and when is it too slow? Trying to balance that is exciting, but also a challenge for a company. >> So you feel like sentiment is still strong, even though we're 10 years into this bull market? You've got Brexit, China tensions with the US, US elections. But generally you see sentiment as still pretty strong demand? >> So I would say that the excitement around technology, the opportunity that is there around technology in its broader sense, is greater than ever before. And I think it's on all of us to be able to help organizations understand how they can consume and see value from it. But it's a fantastic time. >> It gets to the economic indicators. >> So, I know you have to be careful. >> But really, what I think I'm trying to get to is the mindset of the CEO.
The corner office right now: is it that we're going to grow short term by cutting, or are we going to be aggressive and go after this incremental opportunity? And it's probably both. You see a lot of automation in cars. >> Both. And I think if you look fundamentally for organizations, it's the three things: help me make money, help me save money, keep me out of trouble. So those are the pivots they all operate with, and, you know, depending on where an organization is in its journey, whether they're a startup, in the middle, or more mature, and some of the different dynamics of the markets in which they operate as well, there are all different variables, you know? So it's mixed. >> Wendy, thanks so much for spending the time to come on. The Cube really appreciates it. Great keynote, folks watching. If you haven't seen the keynote opening section, that's good. Second, the business model. I think it's really right on. I think that's going to be a conversation that will continue. So thanks for sharing that. Before we leave, I want to just ask a question around, what's going on for you here in Barcelona? As the show winds down, you had all your activities. Take us into the day in the life of what you do. Customer meetings. What were some of those conversations? Take us inside. What goes on for you here? >> I tell you, it's been amazing. It's been an amazing few days. It's a combination of customer conversations around some of the themes we just talked about, conversations with partners, and there are investee companies that we invest in at Cisco that I've been spending some time with, and also spending time with the teams as well. The DevNet zone, you know, is amazing. We have, this afternoon, the closing session, where we've got a fantastic external guest who's coming in. It's going to be really exciting as well. And then, of course, the party tonight, and we'll be announcing the next location, which I'm not going to reveal now.
Later on today. >> We kind of figured it out, because that's our job, to break news, but we're not going to break it for you. You have that. Hey, thank you so much for coming on. Really appreciate it. Wendy Mars, running Europe, Middle East, Africa and Russia for Cisco. She's got her hand on the pulse, and the future is the business model. That's what's going on. Fundamentally radical change across the board in all areas. This is the Cube, bringing you all the action here in Barcelona. Thanks for watching. >> Yeah, yeah.

Published Date : Jan 30 2020

SUMMARY :

Cisco Live 2020 right to you by Cisco and its ecosystem I kind of put a book into the show here. It's absolutely great to be here. In the next 24 months, you can see you can see where it's going. And if you look at it with, it's so exciting with application dynamics there because if you look for us within You know, we've been talking about the the innovation sandwich, you know, you got data I think if you look from, you know, so thinking through those three areas. But what I like about that model and I have to say, this is, you know, 10 years of doing the Cube, So I You know, I'm not disputing the end state and the guidance and support soon drive the transition What role do you see technology in terms of advancing those So I think, you know, if you look at it, technology is fundamental to all of those fears in regard. I think the agility message has always been a technical conversation. Teams operate, how they work and being able to be quick, So you you you have a region that's pretty much I think, the tell signs for what's going on And if I also look at, you know, the dialogue that you saw, How do you see it going? intelligence, Robotics capability five g of course, you know, on the horizon we have Mobile World Congress Some of the things you were talking about before. Is the new oil What do you do with data? You know, when you get your thoughts. But if you look out the Internet of the future, you know what? What's the conversation like in the corner office when you go visit your customers? You know, if you look at top of mind conversations around security So you feel like sentiment. the opportunity that is there around technology in its broader sense is greater than ever before. So but really, the real I think I'm trying to get to is is the mindset both, and I think if you look fundamentally for organizations, it's it's the three things helped me As the show winds down, you had all your activities. 
of course, the party tonight and will be announcing the next location, which I'm not going to reveal now. This is the Cube, bringing you all the action here in Barcelona.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Jason | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Barcelona | LOCATION | 0.99+
Europe | LOCATION | 0.99+
Wendy Mars | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Spain | LOCATION | 0.99+
Gene Kim | PERSON | 0.99+
Fabio Gori | PERSON | 0.99+
13 years | QUANTITY | 0.99+
John | PERSON | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Russia | LOCATION | 0.99+
10 years | QUANTITY | 0.99+
Wendy | PERSON | 0.99+
United States | LOCATION | 0.99+
Barcelona, Spain | LOCATION | 0.99+
US | LOCATION | 0.99+
both | QUANTITY | 0.99+
Second | QUANTITY | 0.99+
Kate Capability | PERSON | 0.99+
Brexit | EVENT | 0.99+
13 years ago | DATE | 0.99+
tonight | DATE | 0.99+
one | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
Middle East Africa | LOCATION | 0.98+
three things | QUANTITY | 0.98+
three items | QUANTITY | 0.98+
SAS | ORGANIZATION | 0.98+
three areas | QUANTITY | 0.98+
Francisco Wendy | PERSON | 0.97+
10 years ago | DATE | 0.97+
today | DATE | 0.96+
this year | DATE | 0.95+
North American | LOCATION | 0.95+
four days | QUANTITY | 0.93+
three | QUANTITY | 0.93+
earlier this week | DATE | 0.93+
each | QUANTITY | 0.92+
Mobile World Congress | EVENT | 0.92+
800 series | QUANTITY | 0.91+
this afternoon | DATE | 0.9+
Cisco Live | EVENT | 0.9+
last decade | DATE | 0.9+
Moore's Law | TITLE | 0.87+
Cube | ORGANIZATION | 0.84+
five g | OTHER | 0.84+
Mars | ORGANIZATION | 0.82+
Day four | QUANTITY | 0.79+
three kind | QUANTITY | 0.78+
next 24 months | DATE | 0.78+
first presenter | QUANTITY | 0.73+
Cube | COMMERCIAL_ITEM | 0.72+
EU | LOCATION | 0.72+
every single business | QUANTITY | 0.72+
Cisco Live 2020 | EVENT | 0.67+
five G | TITLE | 0.67+
California | LOCATION | 0.66+
US | ORGANIZATION | 0.65+
2020 | DATE | 0.63+

Breaking Analysis: The Trillionaires Club: Powering the Tech Economy


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello everyone, and welcome to this week's episode of theCUBE Insights powered by ETR. And welcome to the Trillionaires' Club. In this Breaking Analysis, I want to look at how the big tech companies have really changed the recipe for innovation in the Enterprise. And as we enter the next decade, I think it's important to sort of reset and re-look at how innovation will determine the winners and losers going forward, including not only the sellers of technology but how technology applied will set the stage for the next 50 years of economic growth. Here's the premise that I want to put forth to you. The source of innovation in the technology business has been permanently altered. There's a new cocktail of innovation, if you will, that will far surpass Moore's Law in terms of its impact on the industry. For 50 years we've marched to the cadence of Moore's Law, that is, the doubling of transistor counts every 18 months, as shown in the left-hand side of this chart. And of course this translated, as we know, into a chasing of the chips, whereby being first with the latest and greatest microprocessor brought competitive advantage. We saw Moore's Law drive the PC era, the client-server era, and it even powered the internet, notwithstanding the effects of Metcalfe's Law. But there's a new engine of innovation, or what John Furrier calls the "Innovation Cocktail," and that's shown in the right-hand side of this slide, where data plus machine intelligence (or AI) and Cloud are combinatorial technologies that will power innovation for the next 20-plus years. 10 years of gathering big data have put us in a position to now apply AI. Data is plentiful but insights are not, and AI unlocks those insights. The Cloud brings three things: agility, scale, and the ability to fail quickly and cheaply.
So, it's these three elements, and how they are packaged and applied, that will in my view determine winners and losers in the next decade and beyond. Now why is this era suddenly upon us? Well, I would argue there are three main factors. One is cheap storage and compute, combined with alternative processor types like GPUs that can power AI. And the era of data is here to stay. This next chart from Dave Moschella's book, "Seeing Digital," really underscores this point. Incumbent organizations born in the last century organized largely around human expertise, or processes, or hard assets like factories. These were the engines of competitive advantage. But today's successful organizations put data at the core. They live by the mantra of data-driven. It is foundational to them. And they organize expertise, processes, and people around the data. All you've got to do to drive this point home is look at the market caps of the top five public companies in the U.S. stock market: Apple, Microsoft, Google, Amazon, and Facebook. I call this chart the Cuatro Comas, as a shout-out to Russ Hanneman, the crazy billionaire who was a supporting character in the Silicon Valley series. Now each of these companies, with the exception of Facebook, has hit the trillion-dollar club. AWS, like Mr. Hanneman, hit trillion-dollar club status back in September 2018 but fell back down and lost a comma. These five data-driven companies have surpassed big oil and big finance. I mean, the next closest company is Berkshire at 566 billion. And I would argue that if it hadn't been for the fake news scandal, Facebook probably would be right there with these others. Now, with the exception of Apple, these companies are not highly valued because of the goods they pump out. Rather, and I would argue even in the case of Apple, they're highly valued because they're leaders in digital and in the best position to apply machine intelligence to the massive stores of data that they've collected.
And they have massive scale, thanks to the Cloud. Now, I get that the success of some of these companies is largely driven by the consumer, but the consumerization of IT makes this even more relevant, in my opinion. Let's bring in some ETR data to see how this translates into the Enterprise tech world. This chart shows market share for Microsoft, AWS, Apple iPhone, and Google in the Enterprise all the way back to 2010. Now I get that the iPhone is a bit of a stretch here, but stick with me. Remember, market share in ETR terms is a measure of pervasiveness in the data set. Look at how Microsoft has held its ground. And you can see the steady rise of AWS and Google. Now if I superimpose traditional Enterprise players like Cisco, IBM, or Hewlett or even Dell, that is, companies that aren't competing with data at the core of their business, you would see a steady decline. I am required to black out January 2020, as you probably remember, but that data will be out soon and made public shortly after ETR exits its self-imposed quiet period. Now Apple iPhone is not a great proxy, and Apple is not an Enterprise tech company, but it's data that I can show. And I would argue again that Apple's real value, and a key determinant of its success going forward, lies in how it uses data and applies machine intelligence at scale over the next decade to compete in apps and digital services, content, and other adjacencies. And I would say for these five leaders, and virtually any company in the next decade, this applies. Look, digital means data, and digital businesses are data-driven. Data changes how we think about competition. Just look at Amazon's moves in content, grocery, logistics. Look at Google in automobiles, Apple and Amazon in music. You know, interestingly, Microsoft positions this as a competitive advantage, especially in retail, for instance touting Walmart as a partner, not a competitor, a la Amazon.
The point is that digital data, AI, and Cloud bring forth highly disruptive possibilities and are enabling these giants to enter businesses that previously were insulated from outsiders. And in the case of the Cloud, it's paving the way. Just look at the data from Amazon. The left bar shows Amazon's revenue. AWS represents only 12% of the total company's turnover. But as you can see on the right-hand side, it accounts for almost half of the company's operating income. So, the Cloud is essentially funding Amazon's entrance into all these other businesses and powering its scale. Now let's bring in some ETR data to show what's happening in the Enterprise in terms of share shifts. This chart is a double-Y-axis chart that shows spending levels on the left-hand side, represented by the bars, and the average change in spending, represented by the dots. Focus for a second on the dots and the percentages. Container orchestration at 29% change. Container platforms at 19.7%. These are Cloud-native technologies, and customers are voting with their wallets. Machine learning and AI, nearly 18% change. Cloud computing itself still in the 16% range, 10-plus years on. Look at analytics and big data, in the double digits still, 10 years into the big data movement. So, you can see the ETR data shows that the spending action is in and around Cloud, AI, and data. And in the red, look at the Moore's Law techs like servers and storage. Now, this isn't to say that those go away. I fully understand you need servers, and storage, and networking, and database, and software to power the Cloud, but this data shows that right now, these discrete cocktail technologies are gaining spending momentum. So, the question I want to leave you with is, what does this mean for incumbents, those that are not digital natives or not born in the Cloud? Well, the first thing I'd point out is that while the trillionaires look invincible today, history suggests that they are not invulnerable.
The rise of China, India, open-source, peer-to-peer models, open models, could coalesce and disrupt these big guys if they miss a step or a cycle. The second point I would make is that incumbents are often too complacent. More often than not, in my experience, there is complacency, and there will be a fallout. I hear a lot of lip service given to digital and data-driven, but often I see companies that talk the talk but don't walk the walk. Change will come, and the incumbents will be disrupted, and that is going to cause action at the top. The good news is that the incumbents don't have to build the tech. They can compete with the disruptors by applying machine intelligence to their unique data sets, and they can buy technologies like AI and the Cloud from suppliers. The degree to which they are comfortable buying from these suppliers, who may also be competitors, will play out over time. But I would argue that building that competitive advantage sooner rather than later with data, and learning to apply machine intelligence and AI to their unique businesses, will allow them to thrive, protect their existing businesses, and grow. These markets are large, and the incumbents have inherent advantages in terms of resources, relationships, brand value, customer affinity, and domain knowledge. If they apply those and transform from the top with strong leadership, they will do very, very well in my view. This is Dave Vellante signing out from this latest episode of theCUBE Insights powered by ETR. Thanks for watching everybody. We'll see you next time, and please feel free to comment. On LinkedIn, you can DM me @dvellante. And don't forget, we turned this into a podcast, so check that out on your favorite podcast player. Thanks again.
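The two quantitative claims in this segment, Moore's Law's doubling of transistor counts every 18 months, and AWS turning roughly 12% of Amazon's revenue into almost half of its operating income, can be sanity-checked with a few lines of arithmetic. A minimal sketch; the function names and the dollar figures passed in at the end are illustrative assumptions, not figures from the episode:

```python
def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor implied by doubling every `doubling_months` months."""
    return 2.0 ** (years * 12.0 / doubling_months)

# One decade is ~6.7 doublings, roughly a 100x increase in transistor count.
decade_factor = moores_law_factor(10)

# The 50-year run of Moore's Law is ~33 doublings: a more than
# billion-fold increase, which is why its cadence dominated computing.
fifty_year_factor = moores_law_factor(50)

def operating_income_share(segment_income: float, total_income: float) -> float:
    """Fraction of a company's operating income contributed by one segment."""
    return segment_income / total_income

# Illustrative (invented) figures, in billions: a segment with only 12% of
# revenue can still supply ~half of operating income if its margins are
# far higher than the rest of the business.
aws_like_share = operating_income_share(segment_income=7.0, total_income=14.5)
```

The point of the second function is simply that revenue share and operating-income share diverge whenever margins differ across segments, which is the mechanism behind the "Cloud is funding Amazon's entrance into other businesses" argument.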

Published Date : Jan 18 2020


Buno Pati, Infoworks io | CUBEConversation January 2020


 

>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello everyone, and welcome to this CUBE Conversation. You know, theCUBE has been following the trends in the so-called big data space since 2010. And one of the things that we reported on for a number of years is the complexity involved in wrangling and making sense out of data. The allure of this idea of no schema on write, and very low cost platforms like Hadoop, became a data magnet. And for years, organizations would shove data into a data lake. And of course the joke was it became a data swamp. And organizations really struggled to realize the promised return on their big data investments. Now, while the cloud certainly simplified infrastructure deployment, it really introduced a much more complex data environment and data pipeline, with dozens of APIs and a mind-boggling array of services that required highly skilled data engineers to properly ingest, shape, and prepare that data, so that it could be turned into insights. This became a real time suck for data pros, who spent 70 to 80% of their time wrestling with data. A number of people saw the opportunity to solve this problem and automate the heavy lift of data, and simplify the process to ingest, synchronize, transform, and really prepare data for analysis. And one of the companies that is attacking this challenge is InfoWorks. And with me to talk about the evolving data landscape is Buno Pati, CEO of InfoWorks. Buno, great to see you, thanks for coming in. >> Well thank you Dave, thanks for having me here. >> You're welcome. I love that you're in Palo Alto, you come to MetroWest in Boston to see us (Buno laughs), that's great. Well welcome. So, you heard my narrative. We're 10 years plus into this big data theme and meme. What did we learn, what are some of the failures and successes that we can now build on, from your point of view?
>> All right, so Dave, I'm going to start from the top, with why big data, all right? I think this big data movement really started with the realization by companies that they need to transform their customer experience and their operations, in order to compete effectively in this increasingly digital world, right? And in that context, they also realized very quickly that data was the key asset on which this transformation would be built. So given that, you look at this and say, "What is digital transformation really about?" It is about competing with digital disruption, or fending off digital disruption. And this has become, over time, an existential imperative. You cannot survive and be relevant in this world without leveraging data to compete with others who would otherwise disrupt your business. >> You know, let's stay on that for a minute, because when we started the whole big data, covering that big data space, you didn't really hear about digital transformation. That's sort of a more recent trend. So I got to ask you, what's the difference between a business and a digital business, in your view? >> That is the foundational question behind big data. So if you look at a digital native, there are many of them that you can name. These companies start by building a foundational platform on which they build their analytics and data programs. It gives them a tremendous amount of agility and the right framework within which to build a data-first strategy. A data-first strategy where business information is persistently collected and used at every level of the organization. Furthermore, they take this and they automate this process. Because if you want to collect all your data and leverage it at every part of the business, it needs to be a highly automated system, and it needs to be able to seamlessly traverse on-premise, cloud, hybrid, and multi-cloud environments. Now, let's look at a traditional business. 
In a traditional enterprise, there is no foundational platform. There are things like point tools for ETL, and data integration, and you can name a whole slew of other things, that need to be stitched together and somehow made to work to deliver data to the applications that consume it. The strategy is not a data-first strategy. It is use case by use case. When there is a use case, people go and find the data, they gather the data, they transform that data, and eventually feed an application. A process that can take months to years, depending on the complexity of the project that they're trying to deliver. And they don't automate this. This is heavily dependent, as you pointed out, on engineering talent, highly skilled engineering talent that is scarce. And they have not seamlessly traversed the various clouds and on-premise environments, but rather fragmented those environments, where individual teams are focused on a single environment, building different applications, using different tools, and different infrastructure.
So it's hard for incumbents to fend off the disrupters, and then ultimately become disrupters themselves. But what are you seeing in terms of some of the trends where organizations are having success there? >> One of the key trends that we're seeing, or key attributes of companies that are seeing a lot of success, is when they have organized themselves around their data. Now, what do I mean by that? This is usually a high-level mandate coming down from the top of the company, where they're forming centralized groups to manage the data and make it available for the rest of the organization to use. There are a variety of names that are being used for this. People are calling it their data fabric. They're calling it data as a service, which is pretty descriptive of what it ends up being. And those are terms that are all sort of representing the same concept of a centralized environment and, ideally, a highly automated environment that serves the rest of the business with data. And the goal, ultimately, is to get any data at any time for any application. >> So, let's talk a little bit about the cloud. I mentioned up front that the cloud really simplified infrastructure deployment, but it really didn't solve this problem of, we talked about in terms of data wrangling. So, why didn't it solve that problem? And you got companies like Amazon and Google and Microsoft, who are very adept at data. They're some of these data-first companies. Why is it that the cloud sort of in and of itself has not been able to solve this problem? >> Okay, so when you say solve this problem, it sort of begs the question, what's the goal, right? And if I were to very simply state the goal, I would call it analytics agility. It is gaining agility with analytics. 
Companies are going from a traditional world, where they had to generate a handful of BI and other reporting-type dashboards in a year, to where they literally need to generate thousands of these things in a year, to run the business and compete with digital disruption. So agility is the goal. >> But wait, the cloud is all about agility, is it not? >> It is, when you talk about agility of compute and storage infrastructure. So, there are three layers to this problem. The first is, what is the compute and storage infrastructure? The cloud is wonderful in that sense. It gives you the ability to rapidly add new infrastructure and spin it down when it's not in use. That is a huge blessing, when you compare it to the six to nine months, or perhaps even longer, that it takes companies to order, install, and test hardware on premise, and then find that it's only partially used. The next layer on that is, what is the operating system on which my data and analytics are going to be run? This is where Hadoop comes in. Now, Hadoop is inherently complex, but operating systems are complex things. And Spark falls in that category. Databricks has taken some of the complexity out of running Spark because of their managed-service type of offering. But there's still a missing layer, which leverages that infrastructure and that operating system to deliver this agility, where users can access data that they need anywhere in the organization, without intensely deep knowledge of what that infrastructure is and what that operating system is doing underneath. >> So, in my up front narrative, I talked about the data pipeline a little bit. But I'm inferring from your comments on platform that it's more than just this sort of narrow data pipeline. There's a macro here. I wonder if you could talk about that a little bit. >> Yeah. So, the data pipeline is one piece of the puzzle. What needs to happen? Data needs to be ingested. It needs to be brought into these environments.
It has to be kept fresh, because the source data is persistently changing. It needs to be organized and cataloged, so that people know what's there. And from there, pipelines can be created that ultimately generate data in a form that's consumable by the application. But even surrounding that, you need to be able to orchestrate all of this. Typical enterprise is a multi-cloud enterprise. 80% of all enterprises have more than one cloud that they're working on, and on-premise. So if you can't orchestrate all of this activity in the pipelines, and the data across these various environments, that's not a complete solution either. There's certainly no agility in that. Then there's governance, security, lineage. All of this has to be managed. It's not simply creation of the pipeline, but all these surrounding things that need to happen in order for analytics to run at-scale within enterprises. >> So the cloud sort of solved that layer one problem. And you certainly saw this in the, not early days, but sort of mid-days of Hadoop, where the cloud really became the place where people wanted to do a lot of their Hadoop workloads. And it was kind of ironic that guys like Hortonworks, and Cloudera and MapR really didn't have a strong cloud play. But now, it's sort of flipping back where, as you point out, everybody's multi-cloud. So you have to include a lot of these on-prem systems, whether it's your Oracle database or your ETL systems or your existing data warehouse, those are data feeds into the cloud, or the digital incumbent who wants to be a digital native. They can't just throw all that stuff away, right? So you're seeing an equilibrium there. >> An equilibrium between ... ? >> Yeah, between sort of what's in the cloud and what's on-prem. 
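The responsibilities Buno lists here — ingesting data, keeping it fresh as sources change, cataloging it so people know what's there, transforming it, and orchestrating the whole flow — can be sketched in miniature. This is a rough illustration only; every name and function below is hypothetical and not drawn from InfoWorks or any vendor's actual API:

```python
# Hypothetical sketch of the pipeline responsibilities described above:
# ingest -> sync (keep fresh) -> catalog -> transform -> orchestrate.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Dataset:
    name: str
    rows: list
    version: int = 0  # bumped on each sync, so consumers can see freshness

@dataclass
class Catalog:
    entries: dict = field(default_factory=dict)

    def register(self, ds: Dataset):
        # Cataloging: record what data exists so "people know what's there".
        self.entries[ds.name] = ds

def ingest(source_rows: list, name: str, catalog: Catalog) -> Dataset:
    ds = Dataset(name=name, rows=list(source_rows))
    catalog.register(ds)
    return ds

def sync(ds: Dataset, source_rows: list):
    # Keep the dataset fresh: source data changes persistently.
    ds.rows = list(source_rows)
    ds.version += 1

def transform(ds: Dataset, fn: Callable, out_name: str, catalog: Catalog) -> Dataset:
    # A pipeline step that produces a consumable dataset and catalogs it too.
    out = Dataset(name=out_name, rows=[fn(r) for r in ds.rows])
    catalog.register(out)
    return out

# Orchestration: run the steps in order; a real system would schedule these
# across on-premise, cloud, and multi-cloud environments.
catalog = Catalog()
raw = ingest([1, 2, 3], "orders_raw", catalog)
sync(raw, [1, 2, 3, 4])
curated = transform(raw, lambda r: r * 10, "orders_curated", catalog)
print(sorted(catalog.entries), curated.rows)
```

As the conversation notes, a production system would also attach governance, security, and lineage metadata to every one of these steps; this sketch only shows the data-movement skeleton.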
Let me ask it this way: If the cloud is not a panacea, is there an approach that does really solve the problem of different datasets, the need to ingest them from different clouds, on-prem, and bring them into a platform that can be analyzed and drive insights for an organization? >> Yeah, so I'm going to stay away from the word panacea, because I don't think there ever is really a panacea to any problem. >> That's good, that means we got a good roadmap for our business then. (both laugh) >> However, there is a solution. And the solution has to be guided by three principles. Number one, automation. If you do not automate, the dependence on skilled talent is never going to go away. And that talent, as we all know, is very very scarce and hard to come by. The second thing is integration. So, what's different now? All of these capabilities that we just talked about, whether it's things like ETL, or cataloging, or ingesting, or keeping data fresh, or creating pipelines, all of this needs to be integrated together as a single solution. And that's been missing. Most of what we've seen is point tools. And the third is absolutely critical. For things to work in multi-cloud and hybrid environments, you need to introduce a layer of abstraction between the complexity of the underlying systems and the user of those systems. And the way to think about this, Dave, is to think about it much like a compiler. What does a compiler do, right? You don't have to worry about what Intel processor is underneath, what version of your operating system you're running on, what memory is in the system. Ultimately, you might-- >> As much as we love assembly code. >> As much as we love assembly code. Now, so take the analogy a little bit further, there was a time when we wrote assembly code because there was no compiler. So somebody had to sit back and say, "Hey, wouldn't it be nice if we abstracted away from this?"
(both laugh) >> Okay, so this sort of sets up my next question, which is, is this why you guys started InfoWorks? Maybe you could talk a little bit about your why, and kind of where you fit. >> So, let me give you the history of InfoWorks. Because the vision of InfoWorks, believe it or not, came out of a rear view mirror. Looking backwards, not forwards. And then predicting the future in a different manner. So, Amar Arsikere is the founder of InfoWorks. And when I met him, he had just left Zynga, where he was the general manager of their gaming platform. What he told me was very very simple. He said he had been at Google at a time when Google was moving off of the legacy systems of, I believe it was Netezza, and Oracle, and a variety of things. And they had just created Bigtable, and they wanted to move and create a data warehouse on Bigtable. So he was given that job. And he led that team. And that, as you might imagine, was this massive project that required a high degree of automation to make it all come together. And he built that, and then he built a very similar system at Zynga, when he was there. These foundational platforms, going back to what I was talking about before digital days. When I met him, he said, "Look, looking back, "Google may have been the only company "that needed such a platform. "But looking forward, "I believe that everyone's going to need one." And that has, you know, absolute truth in it, and that's what we're seeing today. Where, after going through this exercise of trying to write machine code, or assembly code, or whatever we'd like to call it, down at the detailed, complex level of an operating system or infrastructure, people have realized, "Hey, I need something much more holistic. "I need to look at this from a enterprise-wide perspective. "And I need to eliminate all of this dependence on," kind of like the cloud plays a role because it eliminates some of the dependence, or the bottlenecks around hardware and infrastructure. 
"And ultimately gain a lot more agility "than I'm able to do with legacy methodology." So you were asking early on, what are the lessons learned from that first 10 years? And a lot of technology goes through these types of cycles of hype and disillusionment, and we all know the curve. I think there are two key lessons. One is, just having a place to land your data doesn't solve your problem. That's the beginning of your problems. And the second is that legacy methodologies do not transfer into the future. You have to think differently. And looking to the digital natives as guides for how to think, when you're trying to compete with them, is a wonderful perspective to take. >> But those legacy technologies, if you're an incumbent, you can't just rip 'em and throw 'em out and convert. You're going to use them as feeders to your digital platform. So, presumably, you guys have products. You call this space Enterprise Data Ops and Orchestration, EDO2. Presumably you have products and a portfolio to support those higher layer challenges that we talked about, right? >> Yeah, so that's a really important question. No, you don't rip and replace stuff. These enterprises have been built over years of acquisitions and business systems. These are layers, one on top of another. So think about the introduction of ERP. By the way, ERP is a good analogy to what happened, because those were point tools that were eventually combined into a single system called ERP. Well, these are point capabilities that are being combined into a single system for EDO2, or Enterprise Data Operations and Orchestration. The old systems do not go away. And we are seeing some companies wanting to move some of their workloads from old systems to new systems. But that's not the major trend. The major trend is that new things that get done, the things that give you holistic views of the company, and then analytics based on that holistic view, are all being done on the new platforms. So it's a layer on top.
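The compiler analogy from a moment earlier can be made concrete with a toy sketch: one abstract pipeline definition gets "compiled" to whichever environment will run it, so the user never touches the layer underneath. The target names and command templates below are invented purely for illustration; they are not real products or tools:

```python
# Hypothetical "compiler" for an abstract pipeline: the same definition is
# translated to target-specific commands, hiding the underlying environment.
PIPELINE = ["ingest:orders", "transform:clean", "publish:warehouse"]

def compile_pipeline(steps, target):
    """Translate abstract pipeline steps into commands for one backend."""
    templates = {
        "spark_on_prem": "spark-submit --step {}",   # invented template
        "cloud": "cloud-dataflow run {}",            # invented template
    }
    if target not in templates:
        raise ValueError(f"unknown target: {target}")
    return [templates[target].format(step) for step in steps]

# The pipeline author writes PIPELINE once; only the compiled output differs
# between on-premise and cloud, much like one C program targeting two CPUs.
print(compile_pipeline(PIPELINE, "spark_on_prem"))
print(compile_pipeline(PIPELINE, "cloud"))
```

The design point mirrors the conversation: moving a workload between environments means recompiling the same definition, not rewriting the "assembly code" by hand.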
It's not a rip and replace of the layers underneath. What's in place stays in place. But for the layer on top, you need to think differently. You cannot use all the legacy methodologies and just say that's going to apply to the new platform or new system. >> Okay, so how do you engage with customers? Take a customer who's got, you know, on-prem, they've got legacy infrastructure, they don't want to get disrupted. They want to be a digital native. How do you help them? You know, what do I buy from you? >> Yeah, so our product is called DataFoundry. It is an EDO2 system. It is built on the three founding principles that I mentioned earlier. It is highly automated. It integrates all the capabilities that surround pipelines. And ultimately, it's also abstracting. So we're able to very easily traverse one cloud to another, or on-premise to the cloud, or even back. There are some customers that are moving some workloads back from the cloud. Now, what's the benefit here? Well first of all, we lay down the foundation for digital transformation. And we enable these companies to consolidate and organize their data in these complex hybrid, cloud, multi-cloud environments. And then generate analytics use cases 10x faster with about a tenth of the resources. And I'm happy to give you some examples of how that works. >> Please do. I mean, maybe you could share some customer examples? >> Yeah, absolutely. So, let me talk about Macy's. >> Okay. >> Macy's is a customer of ours. They've been a customer for about, I think about 14 months at this point in time. And they had built a number of systems to run their analytics, but then recognized what we're seeing other companies recognize. And that is, there's a lot of complexity there. And building it isn't the end game. Maintaining it is the real challenge, right? So even if you have a lot of talent available to you, maintaining what you built is a real challenge. So they came to us.
And within a period of 12 months, I'll just give you some numbers that are just mind-blowing. They are currently running 165,000 jobs a month. Now, what's a job? A job is an ingestion job, or a synchronization job, or a transformation. They have launched 431 use cases over a period of 12 months. And you know what? They're just ramping. They will get to thousands. >> Scale. >> Yeah, scale. And they have ingested a lot of data, brought in a lot of DataSources. So to do that in a period of 12 months is unheard of. It does not happen. Why is it important for them? So what problem are they trying to solve? They're a retailer. They are being digitally disrupted like (chuckles) no one else. >> They have an Amazon war room-- >> Right. >> No doubt. >> And they have had to build themselves out as an omni-channel retailer now. They are online, they are also with brick-and-mortar stores. So you take a look at this. And the key to competing with digital disrupters is the customer experience. What is that experience? You're online, how does that meld with your in-store experience? What happens if I buy online and return something in a store? How does all this come together into a single unified experience for the consumer? And that's what they're chasing. So that was the first application that they came to us with. They said, "Look, let us go into a customer 360. "Let us understand the entirety "of that customer's interaction "and touchpoints with our business. "And having done so, we are in a position "to deliver a better experience." >> Now that's a data problem. I mean, different DataSources, and trying to understand 360, I mean, you got data all over the place. >> All over the place. (speaking simultaneously) And there's historical data, there's stuff coming in from, you know, what's online, what's in the store. And then they progress from there. I mean, they're not restricting it to customer experience and selling.
They're looking at merchandising, and inventory, and fulfillment, and store operations. Simple problem. You order something online, where do I pull this from? A store or a warehouse? >> So this is, you know, big data 2.0, just to use a sort of silly term. But it's really taking advantage of all the investment. I've often said, you know, Hadoop, for all the criticism it gets, it did lower our cost of getting data into, you know, at least one virtual place. And it got us thinking about how to get insights out of data. And so, what you're describing is the ability to operationalize your data initiatives at scale. >> Yeah, you can absolutely get your insights off of Hadoop. And I know people have different opinions of Hadoop, given their experience. But what they don't have, what these customers have not achieved yet, most of them, is that agility, right? So, how easily can you get your insights off of Hadoop? Do I need to hire a boatload of consultants who are going to write code for me, and shovel data in, and create these pipelines, and so forth? Or can I do this with a click of a button, right? And that's the difference. That is truly the difference. The level of automation that you need, and the level of abstraction that you need, away from this complexity, has not been delivered. >> We did, in, it must have been 2011, I think, the very first big data market study from anybody in the world, and put it out on, you know, Wikibon, free research. And one of the findings was (chuckles) this is a huge services business. I mean, professional services are where all the money was going to flow because it was so complicated. And that's kind of exactly what happened. But now we're entering, really it seems like, a phase where you can scale, and operationalize, and really simplify, and really focus your attention on driving business value, versus making stuff work. >> You are absolutely correct. So I'll give you the numbers. 55% of this industry is services.
About 30% is software, and the rest is hardware. Break it down that way. 55%. So what's going on? People will buy a big data system. Call it Hadoop, it could be something in the cloud, it could be Databricks. And then, welcome to the world of SIs. Because at this point, you need these SIs to write code and perform these services in order to get any kind of value out of that. And look, we have some dismal numbers that we're staring at. According to Gartner, only 17% of those who have invested in Hadoop have anything in production. This is after how many years? And you look at surveys from, well, pick your favorite. They all look the same. People have not been able to get the value out of this, because it is too hard. It is too complex and you need too many consultants (laughs) delivering services for you to make this happen. >> Well, what I like about your story, Buno, is you're not, I mean, a lot of the data companies have pivoted to AI. Sort of like, we have a joke, ya know, same wine, new bottle. But you're not talking about, I mean sure, machine intelligence, I'm sure, fits in here, but you're talking about really taking advantage of the investments that you've made in the last decade and helping incumbents become digital natives. That sounds like it's at least a part of your mission here. >> Not become digital natives, but rather compete with them. >> Yeah, right, right. >> Effectively, right? >> Yep, okay. >> So, yeah, that is absolutely what needs to get done. So let me talk for a moment about AI, all right? Way back when, there was another wave of AI in the late 80s. I was part of that, I was doing my PhD at the time. And that obviously went nowhere, because we didn't have any data, we didn't have enough compute power or connectivity. Pretty inert. So here it is again. Very little has changed. Except now we do have the data, we have the connectivity, and we have the compute power. But do we really? So what's AI without the data? Just A, right?
There's nothing there. So what's missing, even for AI and ML, which I believe are going to be powerful game changers? For them to be effective, you need to provide data to them, and you need to be able to do so in a very agile way, so that you can iterate on ideas. No one knows exactly what AI solution is going to solve your problem or enhance your business. This is a process of experimentation. This is what a company like Google can do extraordinarily well, because of this foundational platform. They have this agility to keep iterating, and experimenting, and trying ideas. Because without trying them, you will not discover what works best. >> Yeah, I mean, for 50 years, this industry has marched to the cadence of Moore's Law, and that really was the engine of innovation. And today, it's about data, applying machine intelligence to that data. And the cloud brings, as you point out, agility and scale. That's kind of the new cocktail for innovation, isn't it? >> The cloud brings agility and scale to the infrastructure. >> In low risk, as you said, right? >> Yeah. >> Experimentation, fail fast, et cetera. >> But without an EDO2 type of system that gives you a great degree of automation, you could spend six months to run one experiment with AI. >> Yeah, because-- >> In gathering data and feeding it to it. >> 'Cause if the answer is people and throwing people at the problem, then you're not going to scale. >> You're not going to scale, and you're never going to really leverage AI and ML capabilities. You need to be able to do that not in six months, but in six days, right, or less. >> So let's talk about your company a little bit. Can you give us the status, you know, where you're at? As their newly minted CEO, what your sort of goals are, milestones that we should be watching in 2020 and beyond? >> Yeah, so newly minted CEO, I came in July of last year. This has been an extraordinary company. I started my journey with this company as an investor.
And it was funded by actually two funds that I was associated with, first being Nexus Venture Partners, and then Centerview Capital, where I'm still a partner. And myself and my other two partners looked at the opportunity and what the company had been able to do. And in July of last year, I joined as CEO. My partner, David Dorman, who used to be CEO of AT&T, he joined as chairman. And my third partner, Ned Hooper, joined as President and Chief Operating Officer. Ned used to be the Chief Strategy Officer of Cisco. So we pushed pause on the funding, and that's about as all-in as a fund can get. >> Yeah, so you guys were operational experts that became investors, and said, "Okay, we're going to dive back in "and actually run the business." >> And here's why. So we obviously see a lot of companies as investors, as they go out and look for funding. There are three things that come together very rarely. One is a massive market opportunity combined with the second, which is the right product to serve that opportunity. But the third is pure luck, timing. (Dave chuckles) It's timing. And timing, you know, it's a very very challenging thing to try to predict. You can get lucky and get it right, but then again, it's luck. This had all three. It was the absolute perfect time. And it's largely because of what you described, the 10 years of time that had elapsed, where people had sort of run the experiment and were not going to get fooled again by how easy this supposed to be by just getting one piece or the other. They recognized that they need to take this holistic approach and deploy something as an enterprise-wide platform. >> Yeah, I mean, you talk about a large market, I don't even know how you do a TAM, what's the TAM? It's data. (laughs) You know, it's the data universe, which is just, you know, massive. So, I have to ask you a question as an investor. I think you've raised, what 50 million, is that right? >> We've raised 50 million. The last round was led by NEA. 
Right, okay. You got great investors, hefty amount. Although, you know, in this day and age, you know, you're seeing just outrageous amounts being raised. Software obviously is a capital efficient business, but today you need to raise a lot of money for promotion, right, to get your name out there. What are your thoughts, as a Silicon Valley investor, on this wave? I mean, get it while you can, I guess. You know, we're in the 10th year of this boom market. But your thoughts? >> You're asking me to put on my other hat. (Dave laughs) I think companies have, in general, raised too much money at too high a value too fast. And there's a penalty for that. And the down round IPO, which has become fashionable these days, is one of those penalties. It's a clear indication. Markets are very rational, public markets are very rational. And the pricing in a public market, when it's significantly below the pricing in a private market, is telling you something. So, we are a little old-fashioned in that sense. We believe that a company has to lay down the right foundation before it adds fuel to the mix and grows. You have to have evidence that the machinery that you build, whether it's for sales, or marketing, or other go-to-market activities, or even product development, is working. And if you do not see all of those signs, you're building a very fragile company. And adding fuel in that setting is like flooding the carburetor. You don't necessarily go faster. (laughs) You just-- >> Consume more. >> You consume more. So there's a little bit of, perhaps, old-fashioned discipline that we bring to the table. And you can argue against it. You can say, "Well, why don't you just raise a lot of money, "hire a lot of sales guys, and hope for the best?" >> See what sticks? (laughs) >> Yeah. We are fully expecting to build a large institution here. And I use that word carefully. And for that to happen, you need the right foundation down first.
>> Well, that resonates with us east coast people. So, Buno, thanks very much for comin' on theCUBE and sharing with us your perspectives on the marketplace. And best of luck with InfoWorks. >> Thank you, Dave. This has been a pleasure. Thank you for having me here. >> All right, we'll be watching, thank you. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see ya next time. (upbeat music fades out)

Published Date : Jan 14 2020



David Shacochis, CenturyLink & Brandon Sweeney, VMware | AWS re:Invent 2019


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services along with its ecosystem partners. >> Welcome back here to AWS re:Invent 2019. Great show going on here in Las Vegas at the Sands, where we're live here on theCUBE. Once again, covering it from wall to wall; we'll be here until late tomorrow afternoon. Dave Vellante and John Walls here, joined by David Shacochis, who is the vice president of product management for hybrid IT at CenturyLink. Good to see you, you guys. And Brandon Sweeney, who's the SVP of worldwide cloud sales at VMware. Good to be with you. This is going to be a New England sports segment, actually, surrounded by Bruins, Celtics... >> ESPN in Vegas. >> I'll remind you, the Washington Nationals are the reigning World Series champs. >> Wait a moment, wait. (laughs) A moment in time. Let's talk about your relationship between VMware and CenturyLink. And what brings you here, the AWS offering you guys run on AWS? >> Maybe I'll jump in. So look, VMware, a long-time player in the infrastructure space. Obviously an incredible relationship with AWS. Customers want to transform their operations. They want to move to the cloud. We have VMware Cloud on AWS. We continue to take tremendous ground helping customers build more agile infrastructure and make that happen. VMware was built on our partners, right? CenturyLink, great partner, MSP. And we think about helping customers achieve their business outcomes; key partners like CenturyLink make it happen. You've been a long-term partner and done a lot of great things with us. >> Yeah, and really, what CenturyLink and VMware have done...
I mean, really, we sort of created the managed private cloud market in the early days of managing VMware solutions for customers. But where we really differentiate in working with VMware on AWS is with elements of our network: the ability to take those kinds of solutions and make sure that they're connected to the right networks, and that they're tied in and integrated with the customer's existing enterprise and where they want to go as they start to distribute the workload more widely. Because we run that network, we see a lot of the internet traffic. We see a lot of threat patterns. We see a lot of things emerge with our cybersecurity capabilities and managed services. So we add value there. And because of that history with VMware in sort of creating that hosted private cloud environment, there's a lot of familiarity with that complexity inside of our service offer, where we can manage the VMware environment in a traditional model that is cloud verified, and then you can manage it as it starts to move onto the AWS platform. Because, as we all know, and as even Andy has referenced at different points, just about every kind of workload can go to AWS, but there are still certain things that can't quite go there. And building a hybrid solution basically puts customers in a position to innovate; that's what a hybrid solution is all about. >> That kind of moves the needle on some of those harder-to-move workloads, and VMware is such an obvious place to start. So you try to preserve that existing VMware customer experience, but at the same time you want to bring the cloud experience. So how is that evolving? >> Yes, it's a couple things, right? So ultimately, customers all want to move to the cloud for all the reasons: they want security, agility, governance, et cetera, right? But fundamentally they need help.
And so partners like CenturyLink help figure out which workloads are cloud-ready, right? And figure that out, and then, you get to know the customer really well given the relationships that you have, right? And you can help them figure out, which workloads am I going to move, right? And then that leads into more relationships on, how do I set up DR, right? How do I offer other services through AWS against those workloads? >> There's a lot of things where, being a managed services provider for a VMware-based platform or being a managed services provider for an AWS platform, there's a lot of things that you have in common, right? First and foremost is that ability to run your operations securely. You've got to be secure. You know, you need to be able to maintain that bond of trust; you need to be auditable. Your operations model needs to be something that's transparent to the customer. You need to not just be about migrating workloads to the new and exciting environment, but also helping to transform them and take advantage of, whether it's a VMware feature or tool, or a next-generation AWS feature, what it will do. It's not just migrate, lift and shift, but then help to transform what that downstream, long-term platform can do. You certainly want to be in a posture where you're building a sense of intimacy with the customer. You're learning their acronyms. You're learning their business processes. You're building up that bond of trust where you can really be flexible with that customer. That's where the MSP community can also come in, because there's a lot of creative things we can do commercially, contracting-wise, binding services together into broader solutions and service level agreements, that can go and give the customer something that they couldn't just get by going to each individual technology platform on their own. >> And there's ways
>>I think you're right and we think about helping Dr customers success manage service providers because of those engine relationship with customers. We've had tremendous success of moving those workloads, driving consumption of the service and really driving better business outcomes based on those relationships you have. >>So let's talk about workloads, guys. Course. Remember Paul Maritz when he was running the M word? He said Eddie Eddie Workload. Any application called it a device. He called it a software mainframe and Christian marketing people struck that from the parlance. But that's essentially what's happened pretty much run anything on somewhere. I heard Andy Jassy Kino talking about people helping people get off on mainframes. And so I feel like he's building the cloud mainframe. Any work less? But what kind of workloads are moving today? It's not. Obviously, he acknowledged, some of the hard core stuff's not gonna move. He didn't specify, but it's a lot of that hard core database ol TV transit transaction, high risk stuff. But what is moving today? Where do you see that going? >>Don't talk about some customers. >>Yeah, >>so a lot of joint customers we have that. I think you fall into that category. In fact, tomorrow on Thursday, we're actually leading a panel discussion that really dives into some customers. Success on the AWS platform that Central Lincoln are managed service is practice has been able to help them achieve what's interesting about that We have. We have an example from the public sector. We have an example from manufacturing and from from food and beverage example from the transportation industry and airlines. What's really interesting is that in all those use cases that will be diagramming out tomorrow, where VM Where's part of all of them, right? And sometimes it's because I am. Where is a critical part of their existing infrastructure? 
And so what we're trying to be able to do is design, you know, sort of systems of innovation, systems of engagement that they're running inside of an AWS or broadly distributed AWS architecture. But it still needs network integration, security, and connectivity back to the crown jewels and what's kept in a lot of those workloads that are already running on the VMware platform. So that's where we see a good deal of movement: your sort of innovative workloads, your engagement workloads, some of your digital experience platforms. We're working with an airline that wants to start building up a series of initiatives where they want to be able to sell vacation packages and be very creative in how they market and deliver those, pulling through airline sales along the way. They're going to be designing those digital initiatives in AWS, but they need access to flight information, schedule information, logistics information that they keep inside of their VMware environment in the centralized data center. And so they're starting to look at workloads like that. We've started to look at VMware Cloud on AWS as, in and of itself, a workload moving up to AWS. There's a range of these solutions that we're starting to see, but a lot of it is still early; he had the graphic up, we're still in the very early days of cloud adoption. I still see a lot of workloads that are moving to AWS that are in that system of engagement: how can I digitally engage with my customers better? That's where a lot of the innovation is going on, and that's where a lot of the workloads are running and launching. >> I mean, we're seeing tremendous momentum, and ultimately you can take any workload and move it to the cloud, right, and do it in an efficient and speedy path. And we've got customers moving thousands of workloads, right? They may decide over time to refactor them, but first and foremost, they can move them.
They relocate them to the cloud. They can save a lot of costs out of that. They can use the exact same interface, or pane of glass, in terms of how they manage those workloads, whether they're on-prem or off-prem. It gives them tremendous agility. And if they decide over time they have to refactor some workloads, which can be quite costly, they have that option. But there's no reason they shouldn't move every single workload today. >> Is there a disadvantage at all if you're left with workloads that have to stay behind, as opposed to someone who's coming up and getting up and running totally on the cloud, and they're enjoying all those efficiencies and capabilities? Are you at a little bit of a disadvantage because you have to keep some legacy things lingering behind? Or how do you eventually close that gap to enjoy the benefits of new technologies? >> Yeah, there's sort of an old saying that, you know, if you're an enterprise, that means you've had to make a lot of decisions along the way, right? And so presumably those decisions added value; it's your enterprise, or else you wouldn't be an enterprise. So it really comes down to those systems of record, those legacy systems. We talk about legacy systems... >> Only in IT is the word legacy a negative; I know, it's a pejorative. >> The majority of your legacy is the value you've built up. A lot of that, whether it's airline flight data or scheduling best practices, our critical crown-jewels kind of data systems, are really important. It really comes down to, if you're an enterprise and you're competing against somebody that is born in the cloud, how well integrated is everything? And are you able to take advantage of, and pace-layer, your innovation strategy so that you can work in the cloud where it makes sense?
You can still take advantage of all the data and intelligence you've built up about your customers. >> So, talking earlier, you guys, it seems like, do you see that cloud is ultimately the destination of all these workloads? But, you know, thinking about Pat Gelsinger, he talked about the laws of physics, the laws of economics, and the laws of the land. So he makes the case for the hybrid... >> Murphy's Law. >> Yeah, so that makes the case for the hybrid world. And it seems like Amazon, to a certain extent, is capitulating on that, and it seems like we've got a long way to go. So it's almost like the cloud model will go to your data wherever it is. You guys, I think, help facilitate that. How do you look at that? >> Yes. I mean, part of that answer is how much data centers are becoming sort of an antiquated model. There is a need for computing and storage in a variety of different locations, right? And we've been sort of going through these cycles back and forth; you used the term software mainframe, the Paul Maritz kind of model, of the original mainframe decentralizing out to client-server, now centralizing again to the cloud, as we see it starting to swing back in the other direction towards devices that are a lot smarter, processors that are, you know, finely tuned for whatever internet of things use case they're being designed for, being able to put business logic a whole lot closer to those devices and the data. So I think that's one of the things VMware said a couple of years ago: data centers were becoming centers of data. And how are you able to go and work with those centers of data? First off, link them all together network-wise, secure them all together, and then manage them consistently.
I think that's one of the things VMware has been really great about, that sort of control plane and data plane separation inside your product design; that makes that a whole lot more feasible. >> I mean, it is a multi-cloud and it's a hybrid cloud world, and we want to give customers flexibility and choice to move their workloads wherever they need, right, based on different decisions, geographic implications, et cetera, security regimens. And, I mean, fundamentally that's where we give customers a tremendous, tremendous amount of flexibility. >> And bringing in the edge complicates... >> Edge, data center, or cloud. >> So maybe it's not a swing back, you know, because it really has been a pendulum swing: mainframe, decentralized, swing back to the cloud. It feels like it's now this ubiquitous push everywhere. >> The pendulum stops. >> Yeah, >> because there's an equal gravitational pull between the power of both poles. >> And compute explodes everywhere. You have storage everywhere. So that brings me to my question of governance, governance and security and the edicts of the organization. You touched on that. So that becomes another challenge. How do you see that playing out, and what kind of role do you play in solving that problem, on the idea of data governance? >> Governance? Yeah. In our opinion, the best way to think about data governance is really with abstraction layers, and being able to have a model-driven approach to what you're deploying out into the cloud. And you can go all in with the data model that exists in the abstraction layers and the model-driven architecture that you can build inside things like AWS CloudFormation, or inside things like Ansible and Chef and Puppet; they model different ways of understanding what your application's known state should be. That's the foundational principle of understanding what your workloads are and how you can actually deliver governance over them.
Once you've modelled it, and you then know how to deploy it against a variety of different platforms, then it's just a matter of keeping track of what you've modelled, where you've deployed it, and inventorying the number of instances, how they scale, and how healthy they are. That's certainly, from a workload standpoint, I think, the governance discipline that you need. In terms of the actual data itself, data governance on where data is getting stored: there's a lot of innovation here on the show floor in terms of software-defined storage and storage abstractions. VMware's got a great software-defined storage capability called vSAN. We're working with a number of different partners within the core of our network, starting to treat storage as sort of a new kind of virtualized network function, using things like CIFS and NFS and iSCSI as VNFs that you can run inside the network. We had an announcement here earlier in the week about our CenturyLink network storage offer. We're actually starting to make storage, and the data policy that allows you to control where it's replicated and where it's stored, just part of the network service that you can add as a value-add. >> Or even the metadata: get the fastest path to get to it if I need to, if I prefer not to move it. You're starting to see, you're talking about this multi-cloud world; it seems like the connections between those clouds are going to be dictated by that metadata and the intelligence to know what the right path is. >> And I think we want to provide the flexibility to figure out where that data needs to reside: cross-cloud, on-prem, off-prem. And you can just hear from the conversation, David, the level of intimacy some of our partners have with customers to work through those decisions, right? If you're going to move those workloads effectively and efficiently, that's where we get a lot of value for our joint customers.
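The model-driven governance idea described above — keep a model of each workload's known state, then inventory what's actually deployed against it — can be sketched in a few lines. This is a hypothetical illustration (the names and structure here are invented, not CenturyLink's or AWS's actual tooling; in practice this role is played by things like CloudFormation drift detection, Ansible, Chef, or Puppet):

```python
# Desired state: the "model" of what should be deployed, per workload.
desired = {
    "web-tier": {"instances": 4, "version": "2.1.0", "region": "us-east-1"},
    "api-tier": {"instances": 2, "version": "2.1.0", "region": "us-east-1"},
}

# Observed state: what an inventory scan of the environment reports.
observed = {
    "web-tier": {"instances": 3, "version": "2.1.0", "region": "us-east-1"},
    "api-tier": {"instances": 2, "version": "2.0.9", "region": "us-east-1"},
}

def drift_report(desired, observed):
    """Compare each modelled workload to what is actually running."""
    report = {}
    for name, model in desired.items():
        live = observed.get(name)
        if live is None:
            report[name] = {"status": "missing"}
            continue
        diffs = {k: (v, live.get(k)) for k, v in model.items() if live.get(k) != v}
        if diffs:
            report[name] = {"status": "drifted", "diffs": diffs}
    return report

print(drift_report(desired, observed))
# web-tier has drifted on instance count, api-tier on version
```

The point of the sketch is the separation: the model is the single source of truth, and governance reduces to continuously diffing reality against it.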
>> I mean, it's pretty fundamental to this notion of digital transformation; that's ultimately what we've been talking about. Digital transformation is all about data: putting data at the core, being able to access it, get insights from it, and monetize, not directly, but understand how data affects the monetization of your business. That's what your customers... >> And I think we... >> want to. Besides, I think we want to simplify; you want to spend more time looking up at your applications than looking down at your infrastructure, right? Based on all the drivers across the different business needs. And again, if we can figure out how to simplify that infrastructure, then people can spend more time on the applications, because that's how they drive differentiation in the market, right? And so let's simplify infrastructure, put it where it needs to be, but we're going to give you time back to drive innovation and focus on differentiating yourself. >> You know, it's interesting, on the topic of digital transformation, there's sort of an interesting little pattern that plays out for those of us that have been in the service provider community for a little while: a lot of the digital transformation success stories that you see, that really get a lot of attention around the public cloud like AWS, the big major moves into going all in on the public cloud, tend to come from companies that went all in on the service provider model 10 years ago. The ones that adopted the idea, "I'm just going to have somebody do this non-differentiating thing for me so that I can focus on innovation," are then in a better position to go start moving to the cloud, as opposed to companies that have been downward-focused on their infrastructure. Building up skill sets, building up a knowledge base, building up career paths of people that actually were thinking about the technology itself as part of their job description, they've had a hard time letting go.
It's sort of: the first step of trusting the service provider to do it for you leads you to that second step of being able to just leverage and go all in on the public cloud. >> And customers need that help, right? And that's where, if we can help activate moving those workloads more quickly, we provide that ability to put more focus on innovation to drive outcomes. >> I know you were talking about legacy a little bit ago, and the negative connotation. I think Tom Brady, don't you think he wants to win ring number seven? (laughs) We'll be back with more. We continue our coverage here, live with theCUBE at AWS re:Invent 2019.

Published Date : Dec 5 2019



Bill Vass, AWS | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Okay, welcome back everyone. It's theCUBE's live coverage here in Las Vegas for Amazon Web Services today, re:Invent 2019. It's theCUBE's seventh year covering re:Invent; eight years they've been running this event. It gets bigger every year. It's been a great wave to ride on. I'm John Furrier, my cohost, Dave Vellante. We've been riding this wave, Dave, for years. It's so exciting, it gets bigger and more exciting. >> Lucky seven. >> This year more than ever. So much stuff is happening. It's been really exciting. I think there's a sea change happening, in terms of another wave coming. Quantum computing, big news here amongst other great tech. Our next guest is Bill Vass, VP of Technology, Storage Automation Management, part of the quantum announcement that went out. Bill, good to see you. >> Yeah, well, good to see you. Great to see you again. Thanks for having me on board. >> So, we love quantum, we talk about it all the time. My son loves it, everyone loves it. It's futuristic. It's going to crack everything. It's going to be the fastest thing in the world. Quantum supremacy. Andy referenced it in my one-on-one with him around quantum being important for Amazon. >> Yes, it is, it is. >> You guys launched it. Take us through the timing. Why, why now? >> Okay, so the Braket service, which is based on the bra-ket notation invented by Dirac, right? So we thought that was a good name for it. It provides for you the ability to do development in quantum algorithms using gate-based programming that's available, and then do simulation on classical computers, which is what we call our digital computers now. (men chuckling) >> Yeah, it's a classic. >> These are classic computers all of a sudden, right?
And then, actually do execution of your algorithms on, today, three different quantum computers: one that's annealing and two gate-based machines. And that gives you the ability to test them in parallel and separate from each other. In fact, last week, I was working with the team and we had two machines, an ion trap machine and an electromagnetic tunneling machine, solving the same problem and passing variables back and forth to each other. You could see the CloudWatch metrics coming out, and the data was going to an S3 bucket on the output. And we do it all in a Jupyter notebook. So it was pretty amazing to see all that running together. I think it's probably the first time two different machines with two different technologies had worked together on a cloud computer, fully integrated with everything else, so it was pretty exciting. >> So, quantum supremacy has been a word kicked around. A lot of hand waving, IBM, Google. Depending on who you talk to, there's different versions. But at the end of the day, quantum is a leap in computing. >> Bill: Yes, it can be. >> It can be. It's still early days, it would be day zero. >> Yeah, well I think if you think of it, we're about where computers were with tubes, if you remember, if you go back that far, right? That's about where we are right now, where you've got to kind of jiggle the tubes sometimes to get them running. >> A bug gets in there. >> Yeah, yeah, a bug can get in there, and all of those kinds of things. >> Dave: You flip 'em off with a punch card. >> Yeah, yeah. So for example, a number of the machines, they run for four hours and then they come down for a half hour for calibration, and then they run for another four hours. So we're still sort of at that early stage, but you can do useful work on them. And more mature systems, like for example D-Wave, which is an annealer, a little different than gate-based machines, are really quite mature, right?
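The "simulate on classical computers first" step Bill describes can be illustrated with a tiny state-vector simulator. This is a hedged sketch in plain Python, not the actual Braket SDK (whose `LocalSimulator` plays this role in practice): it applies a Hadamard gate and then a CNOT to two qubits, producing the entangled Bell state a gate-based program would typically start with.

```python
import math

# Minimal 2-qubit state-vector simulator. Basis order: |00>, |01>, |10>, |11>.
N_QUBITS = 2
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

def apply_single(state, gate, target):
    """Apply a 2x2 gate to qubit `target` (qubit 0 is the leftmost bit)."""
    shift = N_QUBITS - 1 - target
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        if not amp:
            continue
        in_bit = (i >> shift) & 1
        for out_bit in (0, 1):
            j = (i & ~(1 << shift)) | (out_bit << shift)
            new[j] += gate[out_bit][in_bit] * amp
    return new

def apply_cnot(state, control, target):
    """Swap amplitudes of basis states whose `control` bit is 1."""
    c_shift = N_QUBITS - 1 - control
    t_shift = N_QUBITS - 1 - target
    new = list(state)
    for i in range(len(state)):
        if (i >> c_shift) & 1:
            new[i ^ (1 << t_shift)] = state[i]
    return new

# Bell-state circuit: H on qubit 0, then CNOT(control=0, target=1).
state = [1 + 0j, 0j, 0j, 0j]  # start in |00>
state = apply_single(state, H, 0)
state = apply_cnot(state, 0, 1)

probs = [abs(a) ** 2 for a in state]
print(probs)  # |00> and |11> each with probability ~0.5
```

A real quantum computer runs this in hardware; exponential state growth is exactly why classical simulation works for development but stops scaling, and why Braket hands the same circuit off to actual devices.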
And so, I think as you go back and forth between these machines, the gate-based machines and annealers, you can really get a sense for what's capable today with Braket and that's what we want to do is get people to actually be able to try them out. Now, quantum supremacy is a fancy word for we did something you can't do on a classical computer, right? That's on a quantum computer for the first time. And quantum computers have the potential to exceed the processing power, especially on things like factoring and other things like that, or on Hamiltonian simulations for molecules, and those kinds of things, because a quantum computer operates the way a molecule operates, right, in a lot of ways, using quantum mechanics and things like that. And so, it's a fancy term for that. We don't really focus on that at Amazon. We focus on solving customers' problems. And the problem we're solving with Braket is to get them to learn it as it's evolving, and be ready for it, and continue to develop the environment. And then also offer a lot of choice. Amazon's always been big on choice. And if you look at our processing portfolio, we have AMD, Intel x86, great partners, great products from them. We have Nvidia, great partner, great products from them. But we also have our Graviton 1 and Graviton 2, and our new GPU-type chip. And those are great products, too, I've been doing a lot on those, as well. And the customer should have that choice, and with quantum computers, we're trying to do the same thing. We will have annealers, we will have ion trap machines, we will have electromagnetic machines, and others available on Braket. >> Can I ask a question on quantum if we can go back a bit? So you mentioned vacuum tubes, which was kind of funny. But the challenge there was with that, it was cooling and reliability, system downtime. What are the technical challenges with regard to quantum in terms of making it stable?
>> Yeah, so some of it is on classical computers, as we call them, they have error-correction code built in. So you have, whether you know it or not, there's alpha particles that are flipping bits on your memory at all times, right? And if you don't have ECC, you'd get crashes constantly on your machine. And so, we've built in ECC, so we're trying to build the quantum computers with the proper error correction, right, to handle these things, 'cause nothing runs perfectly, you just think it's perfect because we're doing all the error correction under the covers, right? And so that needs to evolve on quantum computing. The ability to reproduce them in volume from an engineering perspective. Again, standard lithography has a yield rate, right? I mean, sometimes the yield is 40%, sometimes it's 20%, sometimes it's a really good fab and it's 80%, right? And so, you have a yield rate, as well. So, being able to do that. These machines also generally operate in a cryogenic world, that's a little bit more complicated, right? And they're also heavily affected by electromagnetic radiation, other things like that, so you have to sort of Faraday-cage them in some cases, and other things like that. So there's a lot that goes on there. Managing a physical environment like cryogenics is challenging to do well, and having the fabrication to reproduce it in a new way is hard. The physics is actually, I shudder to say, well understood. I would say the way the physics works is well understood, how it works is not, right? No one really knows how entanglement works, they just know what it does, and that's understood really well, right? And so, a lot of it now, and why we're excited about it, is that it's an engineering problem to solve, and we're pretty good at engineering. >> Talk about the practicality. Andy Jassy was on the record with me, quoted, said, "Quantum is very important to Amazon." >> Yes it is. >> You agree with that. He also said, "It's years out." You said that.
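The ECC point above — "you just think it's perfect because we're doing all the error correction under the covers" — can be made concrete with the simplest classical scheme, a three-bit repetition code with majority vote. This is a toy stand-in for the far harder quantum error correction Vass is describing, not how quantum codes actually work:

```python
def encode(bit):
    """Triple-redundancy encoding: one logical bit -> three physical bits."""
    return [bit] * 3

def correct(bits):
    """Majority vote recovers the logical bit despite any single flip."""
    return 1 if sum(bits) >= 2 else 0

# A single bit flip (e.g. an alpha-particle strike) is silently corrected
codeword = encode(1)
codeword[0] ^= 1          # flip one physical bit
assert correct(codeword) == 1
print(correct(codeword))  # -> 1
```

Quantum codes face an extra constraint classical ECC does not: you cannot copy an unknown quantum state, so redundancy has to be spread across entangled qubits instead of duplicated bits — part of why the engineering problem he describes is so hard.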
He said, "But we want to make it practical "for customers." >> We do, we do. >> John: What is the practical thing? Is it just kicking the tires? Is it some of the things you mentioned? What's the core goal? >> So, in my opinion, we're at a point in the evolution of these quantum machines, and certainly with the work we're doing with Cal Tech and others, that the number of available qubits are starting to increase at an astronomic rate, a Moore's Law kind of rate, right? No matter which machine you're looking at out there, and there's about 200 different companies building quantum computers now, and they're all good technology. They've all got challenges, as well, with reproducibility, and those kinds of things. And so now's a good time to start learning how to do this gate-based programming knowing that it's coming, because quantum computers, they won't replace a classical computer, so don't think that. Because there is no quantum RAM, you can't run 200 petabytes of data through a quantum computer today, and those kinds of things. What it can do is factoring very well, or it can do probability equations very well. It'll have effects on Monte Carlo simulations. It'll have effects specifically in material sciences where you can simulate molecules for the first time that you just can't do on classical computers. And when I say you can't do on classical computers, my quantum team always corrects me. They're like, "Well, no one has proven "that there's an algorithm you can run "on a classical computer that will do that yet," right? (men chuckle) So there may be times when you say, "Okay, I did this on a quantum computer," and you can only do it on a quantum computer. But then some very smart mathematician says, "Oh, I figured out how to do it on a regular computer. "You don't need a quantum computer for that." And that's constantly evolving, as well, in parallel, right?
And that's what the argument between IBM and Google on quantum supremacy is about. And that's an unfortunate distraction in my opinion. What Google did was quite impressive, and if you're in the quantum world, you should be very happy with what they did. They had a very low error rate with a large number of qubits, and that's a big deal. >> Well, I just want to ask you, this industry is an arms race. But, with something like quantum where you've got 200 companies actually investing in it so early days, is collaboration maybe a model here? I mean, what do you think? You mentioned Cal Tech. >> It certainly is for us because, like I said, we're going to have multiple quantum computers available, just like we collaborate with Intel, and AMD, and the other partners in that space, as well. That's sort of the nice thing about being a cloud service provider is we can give customers choice, and we can have our own innovation, plus their innovations available to customers, right? Innovation doesn't just happen in one place, right? We got a lot of smart people at Amazon, we don't invent everything, right? (Dave chuckles) >> So I got to ask you, obviously, we can take cube quantum and call it qubits, not to be confused with theCUBE video highlights. Joking aside, classical computers, will there be a classical cloud? Because this is kind of a futuristic-- >> Or you mean a quantum cloud? >> Quantum cloud, well then you get the classic cloud, you got the quantum cloud. >> Well no, they'll be together. So I think a quantum computer will be used like we used to use a math coprocessor if you like, or FPGAs are used today, right? So, you'll go along and you'll have your problem. And I'll give you a real, practical example. So let's say you had a machine with 125 qubits, okay? You could just start doing some really nice optimization algorithms on that. So imagine there's this company that ships stuff around a lot, I wonder who that could be?
And they need to optimize continuously their delivery for a truck, right? And that changes all the time. Well that algorithm, if you're doing hundreds of deliveries in a truck, it's very complicated. That traveling salesman algorithm is an NP-hard problem when you do it, right? And so, what would be the fastest best path? But you got to take into account weather and traffic, so that's changing. So you might have a classical computer do those algorithms overnight for all the delivery trucks and then send them out to the trucks. The next morning they're driving around. But it takes a lot of computing power to do that, right? Well, a quantum computer can do that kind of probabilistic equation, not deterministic, a best-fit algorithm like that, much faster. And so, you could have it every second providing that. So your classical computer is sending out the manifests, interacting with the person, it's got the website on it. And then, it gets to the part where here's the problem to calculate, we call it a shot when you're on a quantum computer, and it runs in a few seconds what would take an hour or more. >> It's a fast job, yeah. >> And it comes right back with the result. And then it continues with its thing, passes it to the driver. Another update occurs, (buzzing) and it's just going on all the time. So those kind of things are very practical and coming. >> I've got to ask for the younger generations, my son's super interested as I mentioned before you came on, quantum attracts the younger, smart kids coming into the workforce, engineering talent. What's the best path for someone who has an either advanced degree, or no degree, to get involved in quantum? Is there a certain advice you'd give someone? >> So the reality is, I mean, obviously having taken quantum mechanics in school and understanding the physics behind it to an extent, as much as you can understand the physics behind it, right?
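The delivery-route problem described above is the classic traveling salesman problem. A hedged sketch of why it is so expensive classically — brute force over all orderings of made-up stops, which is fine for three deliveries and combinatorially hopeless for hundreds:

```python
from itertools import permutations
import math

# Hypothetical depot and delivery stops as (x, y) coordinates
stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 4)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_length(order):
    path = ["depot", *order, "depot"]   # start and end at the depot
    return sum(dist(stops[a], stops[b]) for a, b in zip(path, path[1:]))

# Exhaustive search over n! orderings -- the reason this is NP-hard at scale
best = min(permutations(["A", "B", "C"]), key=route_length)
print(best, round(route_length(best), 2))
```

With 3 stops there are 6 orderings; with 100 stops there are roughly 10^157, which is why real fleets fall back on overnight heuristics today, and why the probabilistic best-fit role Vass sketches for quantum hardware is attractive.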
I think the other areas, there are programs at universities focused on quantum computing, there's a bunch of them. So, they can go into that direction. But even just regular computer science, or regular mechanical and electrical engineering are all needed. Mechanical around the cooling, and all that other stuff. Electrical, these are electrically-based machines, just like a classical computer is. And being able to code at low level is another area that's tremendously valuable right now. >> Got it. >> You mentioned best fit is coming, that use case. I mean, can you give us a sense of a timeframe? And people will say, "Oh, 10, 15, 20 years." But you're talking much sooner. >> Oh, I don't, I think it's sooner than that, I do. And it's hard for me to predict exactly when we'll have it. You can already do, with some of the annealing machines, like D-Wave, some of the best fit today, right? So it's a matter of people want to use a quantum computer because they need to do something fast, they don't care how much it costs, they need to do something fast. Or it's too expensive to do it on a classical computer, or you just can't do it at all on a classical computer. Today, there isn't much of that last one, you can't do it at all, but that's coming. As you get to around 50, 52 qubits, it's very hard to simulate that on a classical computer. You're starting to reach the edge of what you can practically do on a classical computer. At about 125 qubits, you probably are at a point where you can't just simulate it anymore. >> But you're talking years, not decades, for this use case? >> Yeah, I think you're definitely talking years. I think, and you know, it's interesting, if you'd asked me two years ago how long it would take, I would've said decades. So that's how fast things are advancing right now, and I think that-- >> Yeah, and the computers just getting faster and faster.
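The 50-odd-qubit crossover mentioned above matches simple arithmetic, assuming naive dense statevector simulation (2^n complex amplitudes, 16 bytes each at double precision — an assumption about the method, not about any particular simulator):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense statevector: 2**n complex128 amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 30 qubits fits on a large workstation; 50 already needs ~16 PiB of RAM
for n in (30, 50, 125):
    print(n, "qubits:", statevector_bytes(n), "bytes")
```

At 30 qubits that is 16 GiB, at 50 qubits about 16 PiB, and at 125 qubits a number beyond any conceivable memory — which is why "you can't just simulate it anymore" holds at that scale.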
>> Yeah, but the ability to fabricate, the understanding, there's a number of architectures that are very well proven, it's just a matter of getting the error rates down, stability in place, the repeatable manufacturing in place, there's a lot of engineering problems. And engineering problems are good, we know how to do engineering problems, right? And we actually understand the physics, or at least we understand how the physics works. I won't claim that, what is it, "Spooky action at a distance," is what Einstein said for entanglement, right? And that's a core piece of this, right? And so, those are challenges, right? And that's part of the mystery of the quantum computer, I guess. >> So you're having fun? >> I am having fun, yeah. >> I mean, this is pretty intoxicating, technical problems, it's fun. >> It is. It is a lot of fun. Of course, the whole portfolio that I run over at AWS is just really a fun portfolio, between robotics, and autonomous systems, and IOT, and the advanced storage stuff that we do, and all the edge computing, and all the monitor and management systems, and all the real-time streaming. So like Kinesis Video, that's the back end for the Amazon Go stores, and working with all that. It's a lot of fun, it really is, it's good. >> Well, Bill, we need an hour to get into that, so we may have to come up and see you, do a special story. >> Oh, definitely! >> We'd love to come up and dig in, and get a special feature program with you at some point. >> Yeah, happy to do that, happy to do that. >> Talk some robotics, some IOT, autonomous systems. >> Yeah, you can see all of it around here, we got it up and running around here, Dave. >> What a portfolio. >> Congratulations. >> Alright, thank you so much. >> Great news on the quantum. Quantum is here, quantum cloud is happening. Of course, theCUBE is going quantum. We've got a lot of qubits here. Lot of CUBE highlights, go to SiliconAngle.com. We got all the data here, we're sharing it with you.
I'm John Furrier with Dave Vellante talking quantum. Want to give a shout out to Amazon Web Services and Intel for setting up this stage for us. Thanks to our sponsors, we wouldn't be able to make this happen if it wasn't for them. Thank you very much, and thanks for watching. We'll be back with more coverage after this short break. (upbeat music)

Published Date : Dec 4 2019


Tobi Knaup, D2iQ | D2iQ Journey to Cloud Native 2019


 

(informative tune) >> From San Francisco, it's theCUBE. Covering D2iQ. Brought to you by D2iQ. (informative tune) >> Hey, welcome back everybody! Jeff Frick here with theCUBE. We're in downtown San Francisco at D2iQ Headquarters, a beautiful office space here, right downtown. And we're talking about customers' journey to cloud native. We talk about it all the time, you hear about cloud native, everyone's rushing in, Kubernetes is the hottest thing since sliced bread, but at the end of the day, you actually have to do it and we're really excited to talk to the founder who's been on his own company journey as he's watching his customers' company journeys and really kind of get into it a little bit. So, excited to have Tobi Knaup, he's a co-founder and CTO of D2iQ. Tobi, great to see you! >> Thanks for having me. >> So, before we jump into the company and where you are now, I want to go back a little bit. I mean, looking through your resume, and your LinkedIn, etc. You're doing it kind of the classic dream-way for a founder. Did the Y Combinator thing, you've been at this for six years, you've changed the company a little bit. So, I wonder if you can just share from a founder's perspective, I think you've gone through four, five rounds of funding, raised a lot of money, 200 plus million dollars. As you sit back now, if you even get a chance, and kind of reflect, what goes through your head? As you've gone through this thing, pretty cool. A lot of people would like this, they think they'd like to be sitting in your seat. (chuckles) What can you share? >> Yeah, it's definitely been, you know, an exciting journey. And it's one that changes all the time. You know, we learned so many things over the years. And when you start out, you create a company, right? A tech company, you have your idea for the product, you have the technology. You know how to do that, right? You know how to iterate that and build it out.
But there's many things you don't know as a technical founder with an engineering background, like myself. And so, I always joke with the team internally, this is that, you know, I basically try to fire myself every six months. And what I mean by that, is your role really changes, right? In the very beginning I wrote code and then I started managing engineers, when, you know, once you built up the team, then managed engineering managers and then did product and, you know. Nowadays, I spend a lot of time with customers to talk about our vision, you know, where I see the industry going, where things are going, how we fit into the greater picture. So, it's, you know, I think that's a big part of it, it's evolving with the company and, you know, learning the skills and evolving yourself. >> Right. It's just funny cause you think about tech founders and there's some big ones, right? Some big companies out there, take Zuckerberg, just to pick on him. But you know, when you start and kind of what your vision and your dream is and what you're coding in that early passion, isn't necessarily where you end up. And as you said, your role in more of a leadership position now, more of a guidance and setting strategy in communicating with the market, communicating with customers has changed. Has that been enjoyable for you, do you, you know, kind of enjoy more the, I don't want to say the elder statesman when you're a young guy, but more kind of that leadership role? Or just, you know, getting into the weeds and writing some code? >> Yeah. Yeah, what always excites me, is helping customers or helping people solve problems, right? And we do that with technology, in our case, but really it's about solving the problems. And the problems are not always technical problems, right?
You know, the software that is at the core of our products, that's been running in production for many years and, you know, in some sense, what we did before we founded the company, when I worked at Airbnb and my co-founders worked at, you know, Airbnb and Twitter, we're still helping companies do those same things today. And so, where we need to help the most sometimes, it's actually on education, right? So, solving those problems. How do you train up, you know, a thousand or 10 thousand internal developers at a large organization, on what are containers, what is container management, cluster management, how does cloud native work? That's often the biggest challenge for folks and, you know, how do they transform their processes internally, how do they become really a cloud native organization. And so, you know, what motivates me is helping people solve problems in, whatever, you know, shape or form. >> Right >> It's funny because it's analogous to what you guys do, in that you got an open-source core, but people, I think, often underestimate the degree of difficulty around all the activities beyond just the core software. >> Mm-hmm. >> Whether, as you said, it's training, it's implementation, it's integration, it's best practices, it's support, it's connecting all these things together and staying on top of it. So, I think, you know, you're in a great position because it's not the software. That's not the hard part, that's arguably, the easy part. So, as you've watched people, you know, deal with this crazy acceleration of change in our industry and this rapid move to cloud native, you know, spawned by the success of the public clouds, you know, how do you kind of stay grounded and not jump too fast at the next shiny object, but still stay current, but still, you know, kind of keep to your knitting in terms of your foundation of the company and delivering real value for the customers? >> Yeah. Yeah, I know, it's exactly right.
A lot of times, the challenges with adopting open source in the enterprise are, for example, around the skills, right? How do you hire a team that can manage that deployment and manage it for many years? Cause once software's introduced in an enterprise, it typically stays for a couple of years, right? And this gets especially challenging when you're using a very popular open-source project, right? Because you're competing for those skills with, literally, everybody, right? A lot of folks want to deploy these things. And then, what people forget sometimes too is, so, a lot of the leading open-source projects, in the cloud native space, came out of, you know, big software companies, right? Kubernetes came from Google, Kafka came from LinkedIn, Cassandra from Facebook. And when those companies deploy these systems internally, they have a lot of other supporting infrastructure around it, right? And a lot of that is centered around day-two operations. Right? How do you monitor these things, how do you do log management, how do you do change management, how do you upgrade these things, keep current? So, all of that supporting infrastructure is what an enterprise also needs to develop in order to adopt open-source software and that's a big part of what we do. >> Right. So, I'd love to get your perspective. So, you said, you were at Airbnb, your co-founders were at Twitter. You know, often people, I think enterprises, fall into the trap of, you know, we want to be like the hyper-scale guys, you know. We want to be like Google or we want to be like Twitter. But they're not. But I'm sure there's a lot of lessons that you learned in watching the hyper-growth of Airbnb and Twitter. What are some of those ones that you can bring and help enterprises with? What are some of the things that they should be aware of as, not necessarily maybe their sales don't ramp like those other companies, but their operations in some of these new cloud native things do? >> Right, right.
Yeah, so, it's actually, you know, when we started the company, the key or one of the drivers was that, you know, we looked at the problems that we solved at Airbnb and Twitter and we realized that those problems are not specific to those two companies or, you know, Silicon Valley tech companies. We realized that most enterprises in the future will have, will be facing those problems. And a core one is really about agility and innovation. Right? Marc Andreessen, one of our early investors, said, "Software is eating the world." He wrote that many years ago. And so, really what that means is that most enterprises, most companies on the planet, will transform into a software company. With all that entails, right? With the agility that software brings. And, you know, if they don't do that, their competitors will transform into a software company and disrupt them. So, they need to become software companies. And so, a lot of the existing processes that these existing companies have around IT, don't work in that kind of environment, right? You just can't have a situation where, you know, a developer wants to deploy a new application that, you know, brings a lot of differentiation for the business, but the first thing they need to do in order to deploy that is file a ticket with IT and then someone will get to it in three months, right? That is a lot of waste of time and that's when people surpass you. So, that was one of the key-things we saw at Airbnb and Twitter, right? They were also in that old-school IT approach, where it took many months to deploy something. And deploying some of the software we work with, got that time down to even minutes, right? So it's empowering developers, right? And giving them the tools to make them agile so they can be innovative and bring the business forward. >> Right.
The other big issue that enterprises have that you probably didn't have in some of those, you know, kind of native startups, is the complexity and the legacy. >> That's right. >> Right? So you've got all this old stuff that may or may not make any sense to redeploy, you've got stuff (laughing) running in data centers, stuff running on public clouds, everybody wants to get to hybrid cloud to have a single point of view. So, it's a very different challenge when you're in the enterprises. What are you seeing, how are you helping them kind of navigate through that? >> Yeah, yeah. So, one of the first things we did actually, so, you know, most of our products are sort of open-core products. They have a lot of open-source at the center, but then, you know, we add enterprise components around that. Typically the first thing that shows up is around security, right? Putting the right access controls in place, making sure the traffic is encrypted. So, that's one of the first things. And then often, the companies we work with, are in a regulated environment, right? Banks, healthcare companies. So, we help them meet those requirements as well and oftentimes that means, you know, adding features around the open-source products to get them to that. >> Right. So, like you said, the world has changed even in the six or seven years you've been at this. The, you know, containers, depending who you talk to, were around, not quite so hot. Docker's hot, Kubernetes is hot. But one of the big changes that's coming now, looking forward, is IOT and EDGE. So, you know, you just mentioned security, from the security point of view, you know, now your attack surface has increased dramatically, we've done some work with Forescout and their secret sauce and they just put a sniffer on your network and find the hundreds and hundreds of devices (laughs)-- >> Yeah.
So do you look forward to kind of the opportunity and the challenges of IOT supported by 5G? What's that do for your business, where do you see opportunities, how are you going to address that? >> Yeah, so, I think IOT is really one of those big mega-trends that's going to transform a lot of things and create all kinds of new business models. And, really, what IOT is for me at the core, it's all around data, right? You have all these devices producing data, whether those are, you know, sensors in a factory in a production line, or those have, you know, cars on the road that send telemetry data in real time. IOT has been, you know, a big opportunity for us. We work with multiple customers that are in the space. And, you know, one fundamental problem with it is that, with IOT, a lot of the data that organizations need to process, are now, all of a sudden generated at the EDGE of the network, right? This wasn't the case many years for enterprises, right? Most of the data was generated, you know, at HQ or in some internal system, not at the EDGE of the network. And what always happens is when, with large-volume data is, compute generally moves where the data is and not the other way around. So, for many of these deployments, it's not efficient to move all that data from those IT devices to a central-cloud location or data-center location. So, those companies need to find ways to process data at the EDGE. That's a big part of what we're helping them with, it's automating real-time data services and machine-learning services, at the EDGE, where the EDGE can be, you know, factories all around the world, it could be cruise ships, it could be other types of locations where working with customers. And so, essentially what we're doing is we're bringing the automation that people are used to from the public cloud to the EDGE. 
So, you know, with the click of a button or a single command you can install a database or a machine-learning system or a message queue at all those EDGE locations. And then, it's not just that stuff is being deployed at the EDGE, I think the, you know, the standard type of infrastructure-mix, for most enterprises, is a hybrid one. I think most organizations will run a mix of EDGE, their data centers and typically multiple public cloud providers. And so, they really need a platform where they can manage applications across all of those environments and, well, that's a big value that our products bring. >> Yeah. I was at a talk the other day with a senior exec, formerly from Intel, and they thought that it's going to level out at probably 50-50, you know, kind of cloud-based versus on-prem. And that's just going to be the way it is cause it's just some workloads you just can't move. So, exciting stuff, so, what as you... I can't believe we're coming to the end of 2019, which is amazing to me. As you look forward to 2020 and beyond, what are some of your top priorities? >> Yeah, so, one of my top priorities is really, around machine-learning. I think machine-learning is one of these things that, you know, it's really a general-purpose tool. It's like a hammer, you can solve a lot of problems with it. And, you know, besides doing infrastructure and large-scale infrastructure, machine-learning has, you know, always been sort of my second baby. Did a lot of work during grad school and at Airbnb. And so, we're seeing more and more customers adopt machine-learning to do all kinds of interesting, you know, problems like predictive maintenance in a factory where, you know, every minute of downtime costs a lot of money. But, machine-learning is such a new space, that a lot of the best practices that we know from software engineering and from running software into production, those same things don't always exist in machine-learning.
And so, what I am looking at is, you know, what can we take from what we learned running production software, what can we take and move over to machine-learning to help people run these models in production, and, you know, where can we deploy machine-learning in our products too, internally, to make them smarter and automate them even more. >> That's interesting, because with machine-learning and AI, you know, there's kind of the tools and stuff, and then there's the application of the tools. And we're seeing a lot of activity around, you know, people using ML in a specific application to drive better performance. As you just said,-- >> Mm-hmm. >> You could do it internally. >> Do you see an open-source play in machine-learning, in AI? Do you see, you know, kind of open-source algorithms? Do you see, you know, a lot of kind of open-source ecosystem develop around some of this stuff? So, just like I don't have time to learn data science, I won't necessarily have to have my own algorithms. How do you see that,-- >> Yeah. >> You know, kind of open-source meets AI and ML, of all things? >> Yeah. It's a space I think about a lot, and what's really great, I think, is that we're seeing a lot of the open-source, you know, best practices that we know from software actually move over to machine-learning. I think it's interesting, right? Deep-learning is all the rage right now, everybody wants to do deep-learning, deep networks. The theory behind deep networks is actually, you know, pretty old. It's from the '70s and '80s. But for a long time, we didn't have enough compute-power to really use deep-learning in a meaningful way. We do have that now, but it's still expensive. So, you know, to get cutting-edge results on image recognition or other types of ML problems, you need to spend a lot of money on infrastructure. It's tens of thousands or hundreds of thousands of dollars to train a model. So, it's not accessible to everyone.
But, the great news is that, much like in software engineering, we can use these open-source libraries and combine them together and build upon them. We have that same kind of composability in machine-learning, using techniques like transfer-learning. And so, you can actually already see some, you know, open community hubs spinning up, where people publish models that you can just take, they're pre-trained. You can take them and, you know, just adjust them to your particular use case. >> Right. >> So, I think a lot of that is translating over. >> And even though it's expensive today, it's not going to be expensive tomorrow, right? >> Mm-hmm. >> I mean, if you look at the world through a lens where, you know, the price of compute, storage and networking is asymptotically approaching zero in the not-too-distant future, and think about how you attack problems that way, that's a very different approach. And sure enough, I mean, some might argue that Moore's Law's done, but the relentless march of Moore's Law types of performance increases isn't done, it's just not necessarily a doubling of transistors anymore. >> Right. >> So, I think there's huge opportunity to apply these things in a lot of different places. >> Yeah, yeah. Absolutely. >> Can be an exciting future. >> Absolutely! (laughs) >> Tobi, congrats on all your successes! A really fun success story, we continue to like watching the ride, and thanks for spending the few minutes with us. >> Thank you very much! >> All right. He's Tobi, I'm Jeff, you're watching The Cube, we're at D2iQ Headquarters in downtown San Francisco. Thanks for watching, we'll catch you next time! (electric chime)
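Tobi's point about composability via transfer-learning can be sketched in code. The toy below is not from the interview, just an illustration of the idea: a stand-in "pretrained" feature extractor is frozen (here it is only a fixed random projection), and only a small linear head is trained on the new task. That split is the essence of why reusing published pre-trained models is so much cheaper than training from scratch:

```python
import math
import random

random.seed(0)

# Stand-in for a pretrained backbone: its weights are FIXED (frozen).
# In practice this would be a network trained on a large corpus.
FROZEN_W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]

def backbone(x):
    # Frozen feature extractor: never updated during fine-tuning.
    return [math.tanh(w[0] * x[0] + w[1] * x[1]) for w in FROZEN_W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy downstream task: classify points by whether x + y > 0.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1.0 if x + y > 0 else 0.0 for x, y in data]

# Only the small linear "head" is trained (plain SGD on log-loss).
head = [0.0] * 4
bias = 0.0
lr = 0.2
for _ in range(200):
    for (x, y), t in zip(data, labels):
        f = backbone((x, y))
        p = sigmoid(sum(h * fi for h, fi in zip(head, f)) + bias)
        g = p - t  # gradient of log-loss w.r.t. the logit
        head = [h - lr * g * fi for h, fi in zip(head, f)]
        bias -= lr * g

correct = sum(
    (sigmoid(sum(h * fi for h, fi in zip(head, backbone(d))) + bias) > 0.5) == (t > 0.5)
    for d, t in zip(data, labels)
)
print(f"train accuracy: {correct / len(data):.2f}")
```

The open model hubs Tobi mentions work the same way at scale: the expensive backbone is downloaded pre-trained, and only a task-specific head is fit on the user's (often small) data.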

Published Date : Nov 7 2019



Carl Holzhauer, Shumaker, Loop & Kendrick, LLP | Microsoft Ignite 2019


 

>> Live from Orlando, Florida. It's theCube, covering Microsoft Ignite. Brought to you by Cohesity. >> Good morning, Cube land, and welcome back to theCube's live coverage of Microsoft Ignite here in Orlando, Florida. I'm your host, Rebecca Knight, along with my cohost Stu Miniman. We are joined by Carl Holzhauer, he is the supervisor of infrastructure at Shumaker, Loop & Kendrick, based in Toledo, Ohio. Thank you so much for coming on the show. So tell our viewers first of all a little bit about, uh, Shumaker Loop. >> So Shumaker is a top 200 law firm in the U.S. We have seven locations across the country, most on the East Coast, and we handle anything from litigation to environmental to legal matters and things like that. >> Okay. And you are the supervisor of infrastructure. >> Yeah, my role there is to make sure anything with plugs and switches keeps working, so. >> All right. And so Carl, tell us a little bit about, you said you've got multiple locations, ones that span states, and do the lawyers tell you everything and how it must be? >> The lawyers definitely have a say in the way things work. We have most of the locations in Florida and the Carolinas and two in Ohio. >> Okay. Um, and you know, with the locations, you know, what are some of the business drivers that you're running on? When I talk to most companies, you know, there's the constant change. Is there M&A happening? Is it growth? What are some of the drivers of your business? >> Oh, for sure it's growth. You know, obviously as time goes on there's more and more cases, more and more legal things have to happen. Lawyers love documents, so we have to store documents, index them, amend them, make sure that they're always available for their use. And then of course, as part of that too, there's, there's legal holds on things, you know, say a case that stretches over, you know, five, 10 years. We need to keep that data safe. >> Yeah. So I would,
Exactly. Bring us in a little bit to them. Side is, so some of those, you know, what do you have to be concerned about? You know, how many petabytes exabytes of things in years >>I'll have to that. It depends on the kind of case it is and what it involves. Some, some cases, as long as you have the data in some form you're okay. Other cases the data can't change. So we have systems that might be a little older because they, it has to be as it was when we actually had the case come into us. Okay. That's challenging too. So data, when we talked to so many companies, it's, you know, how can I monetize data? How can I do that? Data has to be a slightly different role inside your organization. How, how's that thought of, >> we have to be careful obviously because conflict of interest, you know, we so have to keep data separate in some instances and internally, not everybody can see the same data because there is issues with privacy or hippo or you know, or so on and so forth that they can't see this stuff. So for us, we need to keep it safe more than monetize it. >>So as you said, the lawyers have a, have a big say in how things empower things happen. So how would you describe the approach and mindset of, of Chyulu toward technology and toward cloud-based and new kinds of, to, to store >>and keep data safe? We, our goal is to make sure things are always online. Um, so we kind of tend toward the more, the more tried and true methods of doing things, the bleeding edge doesn't always work for us. So, but we also can't afford to, to lag behind. So we need to find that balance in between somehow to keep things moving, but the same time make sure that things don't go down or offline fraternities. So protecting and backing up your data across a hybrid environment isn't easy. So Ty, and I know you, you are on a panel here at ignite about, uh, backup disasters and how to avoid them. 
So I'd love to have you talk a little bit about, about how you think about this, and then, and how you interview vendors and decide what's the right solution for your shop. >> Every different, I guess, practice inside of a law firm has different ways of getting data. They like their programs this way or that way, and they're all different. So the hard part for us is how to keep that data always available to them in different systems. So whatever we do has to encompass making sure all these things work, you know, kind of as, as one. So we've used Cohesity to do backups, and we've used Zerto to do DR, to make sure we're always online. >> Okay. And how long have you been using those solutions? How did you reach those decisions? >> Those were brought in just as I joined the firm, about a year and a half ago. Um, our vendor who we're using is very tight with Cohesity and Zerto and said that might be a good idea. And the more I use them, the more I agree with that. And they're all good. >> So you're saying it's your channel partner that does that. They're trusted, they provide your gear, advise you on the software. Because let's be honest, as time goes on, you can't know everything. So you need somebody that you can trust to bring in and say, hey, do it this way. >> Well, yeah, Carl, I mean, I don't know if you caught the day one keynote, but even those of us that watch the industry, there's no way any of us could keep up with it all. So that, that, that's really important. How do you make sure that that's a trusted advisor? You know, what's, what's the kind of give and take between you? >> I think a lot of that comes down to a gut feeling, right? If you feel slimy when you meet somebody, you know they don't have your best interests in mind. And that's what you want: not my best interest, but the interest of the firm and of the company.
So once you have established those guidelines, you usually can trust what they're saying. And I guess every time you meet them, too, you have to reevaluate: is this still a good fit? >> So when it comes to backup and recovery, I'd love to hear more about this panel and how you and your colleagues came to conclusions about, here's some big ones and here's how you can avoid them. >> So I think for us it was just what worked and what didn't work. You know, we all, all three of us use this stuff day to day. So we found the pitfalls, we found what you should and shouldn't do. And when we share that with the, with the community, we get some good feedback on that. >> So Carl, a year and a half there, any, any specific advice that you'd share with people as to what you've learned? Any pitfalls in there, as you know, was it a configuration issue or something that went wrong? Because we know even with the best intentions and best products out there, you know, things can get in the way. >> Yeah,
You know, they have the same access as we do, but they are just either apply to manufacturing or applied to natural gas or whatever happens to be. Um, and then when you know, meetings from the vendors here, it's interesting too, you know, I'm an illegal mindset now and they say, Hey, what about this? And you go, Oh, that's some game changer. And you know, and all of a sudden you can apply it to your field. >>More sense. Yeah. How about this your first time attending Microsoft ignite? Give us a little bit your impressions, you know, uh, the, the, the good, the bad. And the interesting is it's really >>big. I walked through here Sunday night when when nobody was here. It's like, Oh, this isn't too bad. And then I think I walked 10 miles the first day getting places and it's usually pretty well laid out and unless there's beer or food and everybody kind of goes to it and it's hard to move around. But other than that I think it's pretty cool. So what are the kinds of things you're going to take back? As you said, you are sometimes talking to people who are in a completely different industry with you and they are saying things that spark your interest and spark new ideas. What are the kinds of things you're going to take back to shoe loop when you arrive back in total Toledo, we're trying to look at all these new buzzwords like on new, but like blockchain or AI and how they can help us do our jobs better and serve their attorneys better. Um, is there something that I haven't thought of that blockchain can, can do this >>for us and better than we're doing it now now. So Carl, one of the things we've noticed there, there's a real growth in some of the developer content here as an infrastructure person. And I'm curious your view on that, that that side of the world. >>That is not my strong suit. Obviously I came from a world where that was a big deal and I could learn some things. But as far as my background goes and learning about it, it's kinda over my head. 
Um, you know, I can get it behind this stuff, talk to automate processes and make things, you know better. But as far as the dev side, I'm kind of going, Hey, no, I know if I get this, but, but there is such a push here for citizens and for citizen developers and to sort of democratize this and say even you can do this, which is awesome and in a way because the more eyes have on something, the better they go. You know I can even if I don't understand something, I can ask the question, Hey, why is this work this way? You go, Oh it shouldn't work this way. >>Let's fix this and and make things better. You know. Anything more about kinda your firm's relationship with Microsoft? So many announcements here. Not, no, not sure if teams has used a in your environment. We are using Skype right now but we have way pushed to go to two teams. So that's going to be a big, big push for us in queue for this year and digging next year and then we're looking at moving to Azure at some point. Getting our stuff up there and making you know to be most effective, faster, better. How do you stay up to date with all of these new announcements and not just here at Microsoft, but even in the larger technology community. You can't stop learning. You can't stop reading. You know? You look at the like the slash dots of the world and you just keep looking at things and some things may make sense. >>Some things I'm like, Oh, that's kind of cool. I'll read it later. All of a sudden it goes, Oh, that's a big idea and we should look at this some more. But again, it's having those trusted people that you know or colleagues that say, Hey, I saw this. I saw that. Take a look at that suit. You think so? I know in your, in your off time, you are an officiant of a number of different sports. I'm curious to hear how you bring what you do as an officiant into your job at shoe loop and the similarities. The differences. 
In my help desk days, it was a lot easier because I could take the, the end user ratings a lot easier because I will hold nothing personal, but it's neat too. I mean, when you're an official, you have, there's a, there's a way things work. There's a, there's a set of rules you have to follow and, and it, and even anything that's technology based there, it's all logical progression of things. This is the way things work and not they blinders as much, but as much as you just follow the process, which makes this audience here. Great. Well thank you so much Carla, for having me. It was great having you on the show. Thank you guys. I'm Rebecca Knight for Stu Miniman. Stay tuned for more of the cubes live coverage. Microsoft ignite.

Published Date : Nov 6 2019



Charlie Crocker, Zonehaven & Tim Woodbury, Splunk | Splunk .conf19


 

>> Live from Las Vegas, it's theCube, covering Splunk .conf19. Brought to you by Splunk. >> Hey, welcome back, everyone. We're live here in Las Vegas for Splunk .conf. I'm John Furrier with theCube, with two great guests: Tim Woodbury, director of state and local affairs for Splunk, and Charlie Crocker, CEO of Zonehaven, a very innovative startup doing some incredible things with Splunk Ventures financing, some early financing, around really tech for good. Guys, welcome. Thank you, Charlie. First, explain what you guys are doing real quick, because I think this is a great example of what I've been seeing now for two years, but now, in the past year, a renaissance of entrepreneurial activity around mission-driven tech for good, where entrepreneurs are using the cloud and SaaS models and platforms like Splunk to stand up mission value. >> Mission value. I like the term. >> Explain what you're doing. >> So simply put, we're building an evacuation planning and support tool. So right now, there are more, stronger fires happening. More than half of California's most destructive wildfires have happened just in the last five years. So it's, it's mission critical that we figure this out. Now, these fires are so big, the goal is really just to get people out of harm's way. And that's a difficult job to figure out at three in the morning with a map on the hood of a pickup truck. So we're building Waze for fire. >> There's no Waze for fire. Waze has got public safety, >> people know Waze. But the thing is, is that yesterday I was watching on TV, in Pacific Palisades, California, an air drop of water on the canyon, right before a house, and I see the people running, right? Like, running for their lives. This was serious business. >> Exactly. >> You guys are trying to provide a system. >> We're trying to do.
What we've built are a set of zones, the ability for the fire department, law enforcement and OES to work on customizing hyper-local evacuation plans, hyper-local down to the neighborhood level, and then we're scaling that statewide. So how do you make sure that this fire department and these three law enforcement groups are coordinated beforehand, and how do they have the conversation with the community before the event happens? If we can save five minutes at the time the event happens, we're going to save lives. >> So this is really about making efficiency around the first responders on the scene from leveraging data, which is maps or their... >> Maps data, dynamic data, telemetry data, where the fire's gonna go, simulations for how the fire could potentially grow. Who needs to get out of harm's way first? What's that gonna do to the traffic and road network? >> Talk
To come into a waterfall of a meeting by that sign is just, Yeah, you can't just do that. They can't stand up. How did you guys get involved in this? It's data driven, obviously. What's the story? >>Way, Say, dated everything. We really mean it. It's really you know, it's a personal story for me. I am on the government affairs team here. It's flowing, so I manage relationships with governors and mayors, and these are the issues that they care about right When the city's burning down, the mayor cares about that. The governor, This is you know, one of the governor in California's major initiatives is trying to find solutions on wildfires. I met Charlie, my hometown. Orinda, California Aren't Fire Chief in that town was one of sort of the outside advisers working with Charlie on this idea. And we're and I met him at a house party where the fire chief was telling me to trim my trees back and shrubs back. And then I was at a conference three days later that same fire chief, Dave Liniger. I was on a panel with folks from a super computer lab and NASA and M i t was like, you know, my fire chiefs, Still the smartest guy in that panel. I gotta meet this guy. A few weeks later, we were literally in the field doing these concepts with sensors and data. Super savvy folks. Some of the other folks from Cal Fire there. Dr. Cox was with us today. Here on. You know, we've just been collaborating the whole time and seeing you know that that Splunk and really put some fire power power behind these guys and we see like, Look, they've got the trust of these customers and we need to make sure this idea happens. It's a great idea, and it's gonna save lives. >>It's crazy way did a test burn where we run a small burn on a day where we're very confident it won't grow. Put the sensors out right next to a school in Arena. It was his kid's school. >>Yeah, I have a kindergartner that goes to that school, so >>it's slightly personal for you. 
I could >>be I could be said that this is just me protecting my own. But it is something that I think will save lives around the world. >>First of all, this, there is huge human safety issues on both sides. The ire safety put in harm's way. Those professionals go out all day long, putting their lives at risk to save human, the other human beings. And so that's critical. But if you look at California, this other impact cost impact rolling blackouts because they can't instrument the lines properly just because of the red red flag warnings off wind. I mean, I could be disrupted businesses, disruptive safety. So so PG and e's not doing us any favors either. Sound so easy. Just fix it. >>It sounds easy, but I think with be Jeannie, it's interesting way do need to prevent wildfires and really any way that we can. But like you said, if we could bring more data to the problem maybe we can have the blackouts be smaller. You know, they don't have to be a CZ big. >>There's certainly no lack of motivation to find solutions to this issue. There are lives on the line. There's billions of dollars on the line that these types of solutions own haven a part of part of what is going to fix it. But there are many very large stake holders that need these solutions very quickly. >>Well, you know the doers out there making it happen of the people in the front lines on the people they're trying to protect our cities, our citizens on this sounds like a great example of tech for good, where you guys are doing an entrepreneurial efforts with people who need it. There's a business, miles, not free non profit. You're gonna get paid. It's a business model behind. >>There is a business bottle behind it, and I think the value proposition is only beginning to be understood, right? There were so many missions in so many different ways. Wildfires are massive. You can come at him from satellite, come at him from on the ground. 
We're working with the people on the ground who need to get people out of harm's way. We're focusing on making their jobs easier, so they're safer and they get people out >>more quickly. You guys in the tech business, we always talking. We go. These events were re platforming our business. A digital transformation. You know all the buzz, right? Right. This is actually an acute example of what I would call re platforming life because you're taking a really life example. Fire California Fire forest There, out in the trees trimming thing is all real life. This isn't like, you know, some digital website. >>We certainly I mean, I've been in the data business for more more time than I can remember, and we've got the tools, tools, like Splunk tools. Like Amazon Web service is we've got the data. There's satellites all over. We've got smart people in machine learning way. Need to start applying that to do good, right? It exists. We do not need to go invent new technology right now in order to solve this problem, >>Charlie, really inspired by your position and your your posture. I want you to spend more time talking about that feature because you're an entrepreneur. You're not just detect for good social justice Warrior, You're an experienced data entrepreneur, applying it to a social good project. It's not like I'm gonna change the world, you actually doing it. There's a path for other entrepreneurs to make money to do good things fast. Talk about the journey because with cloud computing, it's not like a 10 year horizon. There's a path for immediate benefit. I >>mean the pat. So I mean in terms of creating a profitable venture. We're a young company way feel like we have a good, good direction way feel like there is a market for this way. Also feel like there's public private partnerships Here is well, I think that we can take the same solutions that we have here and apply them to campuses. You could apply it to, you know, a biotech campus, a university campus. 
You could apply it to a military base, right? Insurance could be involved in this, because of insurance risk; people are losing insurance on their homes as well. So you know, there are a lot of different angles that we can take for this exact same solution. >> Say, what's the expression, data-to-everything? This is an example of taking data and applying it to some use case. >> A very specific one: evacuation, neighborhood evacuation, and really building the community fabric so that people take care of each other and can get out together. Where are the vulnerable populations in that zone? Who's gonna go respond to those if the fire department can't come in, right? How are we gonna get those people out? >> I love the vision. You've also got Splunk Ventures putting some cash in. Congratulations. Talk about the product. Where are you guys at using Splunk? Are you putting data sensors out there, are you leveraging existing data, or both? Take us through some of the nuts and bolts of what's going on. >> Right. So part of it is building out some data sets; there are some data sets that don't exist. But the government and the counties and the private sector have built out a huge corpus of data around where the buildings are, where the people are, where the cell phones are, where the traffic is. So we're able to leverage that information as we have it today. Technology-wise, we're using the Amazon stack. It's easy for us to spin up databases, it's easy for us to build out and expand as we grow. And with Splunk, we're able to have a place for all this real-time data to land, and for us to be able to build APIs to pull it out very simply. >> I was having a conversation with Teresa Carlson, who runs Amazon Web Services public sector; a variety of these kinds of projects are popping up. Tech for good that's for-profit: it helps people, and there's the whole idea of time to value with cloud and Splunk.
The platform of leveraging diverse data, making data real, whether it's real-time, time-series data, or using fabric search or accelerated processing capabilities, means that you can get the value quicker. So if you've got an idea, instead of waiting two years to see whether it was a hit or not, you can iterate now. So this idea of the agile startup is now being applied to these public-safety-like things. So it's everything. >> You're spot on, and you know, the unique element of Splunk here is that with some of these data sources we don't necessarily know which ones are gonna be the right ones. We're talking about satellite data, sensor data, some of it on the ground. Part of it is we're building an outdoor smoke alarm, right? No one's ever done that before. So, you know, with the core nature of Splunk technology, being able to easily try a data source and see if it is the right one is critical, giving people the bandwidth to go try to make this happen. >> You guys at Zonehaven, Charlie, you and your team, are a great example of what I call a reconfiguration of the value creation of startups. You don't need to have the full stack developed; you've got half the stack on Amazon, plus domain expertise. The intellectual property is flipped around, from software being the moat to domain-specific intellectual property. You took the idea from firefighters and you're implementing their idea with your domain expertise, using scale and data to create a viable business. >> The other thing I want to throw in there, though, and this is something that people often forget: a big part of our investment is going to be in user experience. This thing needs to be usable by the masses. It cannot be a complicated solution. >> UX is the new software, data is the new code, and anyone can start a company if they have an innovative idea. You don't have to have a unique algorithm; it could be a use case to solve a problem. If you have a viable algorithm, you can put it on Splunk's platform or Amazon's platform and scale it.
>> This is going to change, I think, the economic landscape of what I call tech for good. It's entrepreneurship redefined, and you guys are a great working example of that. Congratulations on the vision. Thank you to you and your team. Thanks for coming on theCube. >> Thanks for sharing. It's great to be here. >> It's a great example of what's going on with data for everything. Of course, it's theCube, we're the Cube for everything: we go to all the events with smart people and get the data, and share that with you here in Las Vegas at Splunk .conf. Ten years of the conference, our seventh year. I'm John Furrier. We'll be back with more coverage after this short break.

Published Date : Oct 22 2019


Sazzala Reddy, Datrium | CUBEConversation, September, 2019


 

(upbeat music) >> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hi and welcome to theCUBE Studios for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Any business that aspires to be a digital business has to invest in multiple new classes of capabilities required to ensure that their business operates as they're promising to their customers. Now, we've identified a number of these, but one of the things we think is especially important here in theCUBE is data protection, data assurance. If your data is going to be a differentiating asset within your business, you have to take steps to protect it and make sure that it's where it needs to be, when it needs to be there and in the form it's required. Now, a lot of companies talk about data protection, but they kind of diminish it down to just backup. Let's just back up the data, back up the volume. But increasingly, enterprises are recognizing that there's a continuum of services that are required to do a good job of taking care of your data, including disaster recovery. So, what we're going to talk about today is one of the differences between backup and restore, and disaster recovery and why disaster recovery is becoming such an important element of any meaningful and rational digital business strategy. Now, to have that conversation, today we're here with Sazzala Reddy who's the CTO at Datrium. Sazzala, welcome back to theCUBE. >> Happy to be here, Peter. >> So, before we go on this question of disaster recovery and why it's so important, let's start with a quick update on Datrium. Where's Datrium today? You've been through a lot of changes this year. >> Yes, right. We kind of have built a bunch of services as a platform. It will include primary storage, back-up, disaster orchestration and encryption mobility. 
So that last piece of the puzzle was the DR orchestration; we kind of finished that a few months ago, and that's the update. And also, what we're now offering concretely is DR services to the Cloud with the VMware Cloud on Amazon. It is transformational, and people really are adopting it quite heavily with us, because it simplifies what you just said about the business continuity, and it gives them a chance to shut down the second data center and leverage the Cloud in a very cost-effective way, to have that option for them. >> So, let's talk about that, because when you think about the Cloud, typically you think about, especially as you start to bring together the hybrid cloud notion of an on-premise versus a Cloud orientation, you think in terms of an on-premise set of resources and you think in terms of effectively mirroring those resources in the Cloud, and a lot of people have pointed out that that can be an extremely expensive way of doing things. So, historically we had a site, we had a disaster recovery site, maybe we even had a third site, and we had to replicate hardware, we had to replicate networking, we had to replicate software and often a sizeable percentage of staff across all those services. So we've been able to do it more effectively by having the Cloud be the target, but still, having to reserve all that CPU, all that network, seemed like an extremely expensive way of doing things if you only need it when you need it, and ideally, that's not often. >> That's correct. So, Cloud offers us a new way of doing elastic, on-demand pricing, and especially for disaster recovery, it is really useful to think about it that way. In a data center to data center DR like you mentioned, you have to buy all the different products for managing your data: you'll buy primary storage, back-up and DR orchestration, all these different pieces. Then you replicate the same thing somewhere else, and all these pieces are kind of just complicated.
It's called Murphy's Law, you know, imagine that when there's a disaster, everybody's watching you and you're trying to figure out how this is going to work for you, that's when the challenges are, and there's the danger is that until now, disaster recovery has mostly been a disaster. It's never really worked for anybody. So, what Cloud offers you is an opportunity to simplify that and basically get your disaster recovery to be fail proof. >> Well, so, we have the ongoing expense that we're now ameliorating, we're getting rid of, because we are not forcing anyone to reserve all those resources. >> Yeah. >> But one of the biggest problems in disaster recovery has always been, as you said, it's been a disaster. The actual people processes associated with doing or recovering from a disaster in business continuity sense often fails. So, how does doing it in the Cloud, does it mean we can now do more automation in the Cloud from a disaster recovery stand-point? Tell us a little bit about that. >> There are multiple things, not just that the Cloud offers simplicity in that way, you do have to imagine how are you going to build software to help the customer on their journey. The things, like you mentioned, three things people do in a disaster planning kind of thing, one is that they have to do planning, make all these notes, keep it down somewhere and things change. The moment you make these plans, they're broken because somebody did something else. And second thing is they have to do testing, which is time consuming and they're not sure it's going to work for them, and finally when there's a disaster there's panic, everybody's afraid of it. So, to solve that problem, you need to imagine a new software stack, running in the Cloud, in the most cost-efficient way so you can store your data, you can have all these back-ups there in a steady state and not paying very much. 
And S3 costs are pretty low; if you do dedupe on that, it's even lower, so that really brings down the cost of steady-state behavior. But then, when you push the button, we can bring up VMware servers on the Amazon Cloud on-demand. So you only pay for the VMware servers' compute services when you really need them. And when you don't need them anymore, you fix your data center, you push a button, you bring all the data back and shut down the VMware servers. So, it's like paying for insurance after you have an accident. That changes the game. The cost efficiency of doing DR suddenly becomes affordable for everybody, and you can shut down a second data center, cut down the amount of work you have to do, and it gives you an opportunity to actually get that fail-proof-ness and actually know whether it's going to work for you or not. >> But you're shutting down the other data center, but you're also not recreating it in the Cloud, right? >> Yeah. >> So, you've got the data stored there, but you're not paying for all the resources that are associated with that, you're only spinning them up-- >> That's correct. >> in VM form, when there's actually a problem. But I also want to push this a little bit. It suggests also that if you practice, you said test, I'll use the word practice-- >> I did say that. >> As one of the things you need to do. You need to practice your DR. Presumably if you have more of that automated as part of this cloud experience, then, pushing that button, certainly there's going to be some human tasks to be performed, but it increases the likelihood that the recovery process, in the business continuity sense, is more successfully accomplished, is that right? >> Yeah, correct. There are two things in this DR; one is that, do you know it's going to work for you when you actually have a disaster? That's why you think of doing testing, or the, what did you call it, planning-- >> Practice.
>> Practice once in a while. The challenge with that is, why even practice? Like, it takes time and energy for you to do that. You can do it, no problem, but how can we, with software, transform that in such a way that you get notified when something is actually going to go wrong for you? Because we own primary, back-up and DR, all three legs of the stool in terms of how the DR should be working, we run continuous compliance checks every half an hour so that we can detect if something is going wrong: you have changed some plans, or you have added some new things, or networking is bad, whatever, we will tell you right away, pro-actively, in half an hour, that hey, there's a problem, you should go fix it now. So you don't have to do that much planning, that much testing, continuously anymore, because we are telling you right now there's a problem. That itself is such a game changer, in the sense that it's pro-active, versus being reactive when you're doing something. >> Yeah, it dramatically increases the likelihood that the actual recovery process itself is successful. >> Sazzala: Yes, right. >> Where if you have a bunch of humans doing it, it could be more challenging -- >> Sazzala: More fragile. >> And so, as you said, a lot of the scripts, a lot of that automation is now in the solution, and also pro-actively, so if something is no longer in compliance, if it does not fit the scheme and the model that you've established within the overall DR framework, then you can alert the business that something is no longer compliant or is out of bounds, and fix it so that it stays within the overall DR framework. Have I got that right? >> Yes, correct, and you can only do this if you own all the pieces; otherwise, again, it's back to the Murphy's Law, you're testing. So every customer is testing DR in different environment topologies, everybody's different, right?
So then the customer is not the tester of all these pieces fitting together, in different combinations and permutations. Because we have all the three pieces, we are the ones testing it all the time, and everybody is testing the same thing, so it's the same software running everywhere, and that makes the probability of success much higher. >> So it's a great story, Sazzala, but where are you? Where is Datrium today in terms of having these conversations with customers, enacting this, turning this into solutions, changing the way that your customers are doing business? >> Right, we have simplified by converging a lot of services into one platform. That itself is a big deal for a lot of customers; nobody wants to manage stuff anymore, they don't have the time and patience. So, we give them this platform called DVX on-prem, it runs VMware RCLI, it's super efficient. But the next thing, what we're offering today, which is actually very attractive to our customers, is that we give them a path to use the Cloud as a DR site without having to pay the cost of it, and also without having to worry about whether it's working for them or not. The demos are super simple to operate, because once it all works together, there's no complexity anymore, it's all kind of gone away. >> And there are a lot of companies, as we mentioned upfront, that are talking about back-up and restore-- >> Yeah. >> as an approximation of this, but it seems like you've taken it a step further. >> Yeah, so, having been in the business for a while: back-up, yes, back-up can live in the Cloud, you can have long-term back-ups, whatever, but remember that back-up is not DR. If you want to have DR, what DR means is that you're recovering from it; if you have back-up only-- >> Back-up's a tier. >> Back-up is a tier. The thing with back-up is that you have to do rehydration. There are two problems with that.
Firstly, rehydration will take you two days, everybody's watching you while the data center is down and businesses wants to be up and running, two days to recover, maybe 22 days. I recently was with a customer, they have a petabyte of data, takes 22 days to do recovery of the data. That's like, okay I don't know what business -- >> 22 days? >> 22 days. And then another 100 days to bring the data back. So that's the problem with back-up as a topic itself. And secondly, they're converting, a lot of those back-up vendors are converting VMs into Amazon VMs, nothing wrong with Amazon, it's just that, suddenly in a disaster, you're used to all your VCenter, you're used to your VMware environment, and now you're learning some new platform? It's going to re-factor your VMs into something else. That is a different disaster waiting to happen for you. >> Well, to the point, you don't want disaster recovery in three years when you figure it all out, you want disaster recovery now-- >> Now. >> With what you have now. >> That's correct, that's exactly right. So those conversions of VMs leads to a path of, it's a one-way migration, there's no path out of that, it's like Hotel California, you're getting in, not coming out. It may be good for Amazon, but the customers want to solve a problem, which is a DR problem. So by working with VM via Cloud, they have been very friendly with us, we're super good partners with them and they've enabled us access to some of the things there to enable us to be able to work with them, use their APIs and launch VMware servers on-demand. That to me, is a game changer, and that's why it's such a highly interesting topic for a lot of customers. We see a lot of success with it, we're leading with it now, a lot of people just dying to get away from this DR problem, and have business continuity for their business, and what we're giving them is the simplicity of one product, one bill, and one support call. 
You can call us for anything, including Amazon, VMware and Datrium, all the pieces and we'll answer all the questions. >> Now I really like the idea, and you pay for it only as, or after the disaster has been recovered from. >> It's like paying for insurance after the-- >> I like that a lot. All right, Sazzala Reddy, CTO of Datrium, once again thanks for being on theCUBE. >> Oh, thank you very much for having me. >> And thank you for joining us for another CUBE Conversation. I'm Peter Burris, see you next time. (lively brass band music)
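The figures Sazzala cites are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the 500 MB/s sustained restore throughput, the object-storage price, the dedupe ratio, and the per-host compute rate are all illustrative assumptions for this example, not quoted rates:

```python
# Back-of-envelope checks for the DR economics discussed above.
# All throughput and price numbers are illustrative assumptions.

def rehydration_days(data_tb, throughput_mb_s):
    """Days to rehydrate data_tb terabytes at a sustained MB/s rate."""
    total_mb = data_tb * 1_000_000          # 1 TB = 1,000,000 MB (decimal)
    return total_mb / throughput_mb_s / 86_400

def steady_state_monthly(logical_tb, price_per_gb_month=0.023, dedupe=3.0):
    """Monthly cost of parking deduped backups in object storage."""
    return logical_tb * 1024 / dedupe * price_per_gb_month

def failover_compute(hosts, hours, per_host_hour=8.0):
    """Compute cost incurred only while a disaster is being handled."""
    return hosts * hours * per_host_hour

# A petabyte restore at an assumed 500 MB/s sustained lands right around
# the "22 days" figure from the conversation:
print(round(rehydration_days(1000, 500), 1))   # ~23.1 days

# "Insurance after the accident": storage-only steady state for 100 TB
# of logical data, plus compute only for one 72-hour failover on 10 hosts.
print(round(steady_state_monthly(100), 2))     # monthly storage-only cost
print(round(failover_compute(10, 72), 2))      # one-off failover cost
```

The point of the arithmetic is the shape of the model, not the exact dollar amounts: steady-state cost scales with deduped bytes stored, while compute cost accrues only for the hours a failover is actually running.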

Published Date : Sep 25 2019


Prasad Sankaran & Larry Socher, Accenture Technology | Accenture Cloud Innovation Day


 

>> Hey, welcome back everybody, Jeff Frick here from theCube. We're high atop San Francisco in the Accenture Innovation Hub; it's in the middle of the Salesforce Tower. It's a beautiful facility; they had the grand opening about six months ago, and we were here for the grand opening. Very cool space: they've got maker studios, they've got all kinds of crazy stuff going on. But we're here today to talk about cloud and this continuing evolution of cloud in the enterprise: hybrid cloud and multi-cloud and public cloud and private cloud. And we're really excited to have a couple of guys who are really helping customers make this journey, because it's really tough to do by yourself. CEOs are super busy, they worry about security and all kinds of other things, so Accenture is often a trusted partner. We've got two of the leaders from Accenture joining us today: Prasad Sankaran, the senior managing director of Intelligent Cloud Infrastructure for Accenture, and Larry Socher, the global managing director of the Intelligent Cloud Infrastructure offering. Gentlemen, welcome. I love it, intelligent cloud. What is an intelligent cloud all about? You've got it in your title, it must mean something pretty significant. >> Yeah, first of all, thank you for having us. But yeah, absolutely, everything's around becoming more intelligent, around using more automation in the work that we deliver to our clients, and cloud, as you know, is the platform to which all of our clients are moving. So it's all about bringing the intelligence not only into infrastructure, but also into cloud generally. And it's all driven by software. >> Right. It's just funny to think where we are in this journey. We talked a little bit before we turned the cameras on, and you made an interesting comment when I said, you know, when did this cloud for the enterprise start? And you took it back to SaaS-based applications, which,
>> That's true. It isn't just the tallest building in >> everyone's, you know, everyone's got a lot of focus on AWS is rise, etcetera. But the real start was really getting into sass. I mean, I remember we used to do a lot of Siebel deployments for CR M, and we started to pivot to sales, for some were moving from remedy into service now. I mean, we've went through on premise collaboration, email thio 3 65 So So we've actually been at it for quite a while in the particularly the SAS world. And it's only more recently that we started to see that kind of push to the, you know, the public pass, and it's starting to cloud native development. But But this journey started, you know, it was that 78 years ago that we really started. See some scale around it. >> And I think and tell me if you agree, I think really, what? The sales forces of the world and and the service now is of the world office 3 65 kind of broke down some of those initial beers, which are all really about security and security, security, security, Always to hear where now security is actually probably an attributes and loud can brink. >> Absolutely. In fact, I mean, those barriers took years to bring down. I still saw clients where they were forcing salesforce tor service Now to put, you know, instances on prime and I think I think they finally woke up toe. You know, these guys invested ton in their security organizations. You know there's a little of that needle in the haystack. You know, if you breach a data set, you know what you're getting after. But when Europe into sales force, it's a lot harder. And so you know. So I think that security problems have certainly gone away. We still have some compliance, regulatory things, data sovereignty. But I think security and not not that it sold by any means that you know, it's always giving an ongoing problem. But I think they're getting more comfortable with their data being up in the in the public domain, right? Not public. 
>> And I think it also helped them with their progress towards getting cloud native. So, you know, you pick certain applications which were obviously hosted by sales force and other companies, and you did some level of custom development around it. And now I think that's paved the way for more complex applications and different workloads now going into, you know, the public cloud and the private cloud. But that's the next part of the journey, >> right? So let's back up 1/2 a step, because then, as you said, a bunch of stuff then went into public cloud, right? Everyone's putting in AWS and Google. Um, IBM has got a public how there was a lot more. They're not quite so many as there used to be, Um, but then we ran into a whole new host of issues, right, which is kind of opened up this hybrid cloud. This multi cloud world, which is you just can't put everything into a public clouds. There's certain attributes is that you need to think about and yet from the application point of view before you decide where you deploy that. So I'm just curious. If you can share now, would you guys do with clients? How should they think about applications? How should they think about what to deploy where I think >> I'll start in? The military has a lot of expertise in this area. I think you know, we have to obviously start from an application centric perspective. You go to take a look at you know where your applications have to live water. What are some of the data implications on the applications, or do you have by way of regulatory and compliance issues, or do you have to do as faras performance because certain applications have to be in a high performance environment. Certain other applications don't think a lot of these factors will. Then Dr where these applications need to recite and then what we think in today's world is really accomplish. Complex, um, situation where you have a lot of legacy. But you also have private as well as public cloud. 
So you approach it from an application perspective. >> Yeah. I mean, if you really take a look at Army, you look at it centers clients, and we were totally focused on up into the market Global 2000 savory. You know how clients typically have application portfolios ranging from 520,000 applications? And really, I mean, if you think about the purpose of cloud or even infrastructure for that, they're there to serve the applications. No one cares if your cloud infrastructure is not performing the absolute. So we start off with an application monetization approach and ultimately looking, you know, you know, with our tech advisory guys coming in, there are intelligent engineering service is to do the cloud native and at mod work our platforms, guys, who do you know everything from sales forward through ASAP. They should drive a strategy on how those applications gonna evolve with its 520,000 and determined hey, and usually using some, like the six orders methodology. And I'm I am I going to retire this Am I going to retain it? And, you know, I'm gonna replace it with sass. Am I gonna re factor in format? And it's ultimately that strategy that's really gonna dictate a multi and, you know, every cloud story. So it's based on the applications data, gravity issues where they gonna reside on their requirements around regulatory, the requirements for performance, etcetera. That will then dictate the cloud strategies. I'm you know, not a big fan of going in there and just doing a multi hybrid cloud strategy without a really good up front application portfolio approach, right? How we gonna modernize that >> it had. And how do you segment? That's a lot of applications. And you know, how do you know the old thing? How do you know that one by that time, how do you help them pray or size where they should be focusing on us? 
>> So typically what we do is work with our clients to do a full application portfolio analysis, and then we're able to segment the applications based on, you know, importance to the business and some of the factors that both of us mentioned. And once we have that, then we come up with an approach where certain sets of applications move to SaaS, certain other applications you move to PaaS, so, you know, you're basically doing the refactoring and the modernization, and then certain others, you know, you can just lift and shift. So it's really a combination of both modernization and migration. But to do that, you have to initially look at the entire set of applications and come up with that approach. >> I'm just curious, within that application assessment, where is cost savings? Where is "this is just old"? And where are the opportunities to innovate faster? Because we know a lot of the talk these days is cost savings, but the real advantage is execution speed, if you can get it. >> If you go back three or four years, there were a lot of CIO discussions around cost savings. I've really seen our clients shift. Cost never goes away, obviously, right? But there's a lot greater emphasis now on business agility: you know, how to innovate faster, getting your capabilities to market faster, changing the customer experience. So IT is really trying to step up and, you know, enable the business to compete in the marketplace. We're seeing a huge shift in emphasis, or focus at least, starting with, you know, how do I get better business agility out of leveraging cloud and cloud native development, and how do I get better service levels? We've actually started seeing increased emphasis on, hey, you know, these applications need to work.
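The six R's segmentation the two describe (retire, retain, rehost, replatform, refactor, replace with SaaS) can be sketched as a simple decision routine. This is a minimal, hypothetical sketch: the rules, attribute names, and sample portfolio below are illustrative assumptions, not Accenture's actual methodology.

```python
# Hypothetical six R's classifier: assigns each application record one of
# retire / retain / rehost / replatform / refactor / replace.
# The rule ordering and attribute names are invented for illustration.

def classify(app: dict) -> str:
    """Assign one of the six R's to an application record."""
    if not app.get("business_value"):          # nobody depends on it any more
        return "retire"
    if app.get("saas_equivalent"):             # a SaaS product already covers it
        return "replace"
    if app.get("regulatory_lock") or app.get("high_perf"):
        return "retain"                        # stays put, e.g. on private cloud
    if app.get("cloud_native_candidate"):
        return "refactor"                      # rebuild as cloud native
    if app.get("minor_changes_ok"):
        return "replatform"                    # small changes, e.g. managed DB
    return "rehost"                            # plain lift and shift

portfolio = [
    {"name": "claims-mainframe", "business_value": True, "regulatory_lock": True},
    {"name": "old-intranet", "business_value": False},
    {"name": "crm", "business_value": True, "saas_equivalent": True},
    {"name": "booking-ui", "business_value": True, "cloud_native_candidate": True},
]

for app in portfolio:
    print(app["name"], "->", classify(app))
```

In a real engagement the inputs would come from the portfolio analysis Prasad mentions (usage data, compliance flags, performance requirements), not hand-coded booleans.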
So obviously cost still remains a factor, but we see much more emphasis on agility, you know, enabling the business and giving the right service levels and the right experience to the users and customers. >> Big pivot there. OK, and let's get the definitions out, because there's a lot of conversation about public cloud, easy; private cloud, easy; but hybrid cloud and multi-cloud, and confusion about what those are. How do you guys define them? How do you help your customers think about the definitions? >> Yes, I think it's a really good point. So what we're starting to see is, there were a lot of different definitions out there, but I think as I talk to more clients and our partners, we're all starting to, you know, come to the same kind of definition. Multi-cloud is really about using more than one cloud. But hybrid, I think, is a very important concept, because hybrid is really all about the placement of the workload, or where your application is going to run. And then again, it goes to all of these points that we talked about: data gravity and performance and other factors. But it's really all about where you place the specific workload. >> And if you look at it, so if you think about public, it obviously gives us the innovation of the public providers. You look at how fast Amazon comes out with new versions of Lambda, et cetera. So there's the innovation; obviously agility, you can spin up environments very quickly, which is, you know, one of the big benefits; and the consumption economic model. So there are a number of drivers pushing in the direction of public. You know, on the private side, there are still quite a few benefits that don't get talked about as much.
So, you know, if you look at it: performance. In the public world, you know, although they're scaling up larger T-shirt sizes, et cetera, they're still trying to do that for a large array of applications. On the private side, you can really tailor something to very high performance characteristics. Whether it's, you know, a 30 to 64 terabyte HANA, you can get a much more focused, precision environment for business-critical workloads like that: Oracle, Oracle RAC, the Hadoop clusters doing fraud analysis. So that's a big part of it. Related to that is the data gravity that Prasad just mentioned. You know, if I've got a 64 terabyte HANA database sitting in my private cloud, it may not be that convenient to go and get that data up into Redshift or into Google's TensorFlow. So there are some data gravity issues; the networks just aren't there, and the latency of moving that stuff around is a big issue. And then a lot of people have big investments in their data centers. I mean, the other piece that's interesting is legacy. You know, as we look at the world, there's a ton of code still living in, you know, whether it's Unix systems or IBM mainframes. There's a lot of business value there, and sometimes the business cases aren't necessarily there to replace them, right? And in the world of digital, the decoupling where I can start to use microservices, we're seeing a lot of trends. We worked with one hotel to take their reservation system and wrap it in microservices; we then did, you know, an OpenShift, Couchbase front end. And now, when you go and browse properties, when you're looking at rates, you're actually going into a distributed database cache, you know, using the latest cloud native technologies that can be dropped every two weeks, or every three or four days for the mobile application.
And it's only when the transaction goes back to reserve the room that it goes back to the mainframe. So we're seeing a lot of power with digital decoupling, but we still need to take advantage of, you know, these legacy applications. So with the data centers, we're really trying to evolve them. And really, just, you know, how do we learn everything from the world of public and start to bring those similar type efficiencies to the world of private? And what we're seeing is this emerging approach where I can start to take advantage of the innovation cycles, the Lambdas, you know, the Redshifts, the functions of the public world, but then maybe keep some of my more business-critical, regulated workloads on the private side, right? I've got GxP compliance, I've got HIPAA data that I need to worry about, GDPR, you know, the whole set of regulatory requirements. Now, over time, we do anticipate the public guys will get much better and more compliant; in fact, they've made great headway already. But a number of clients are still, you know, not 100% comfortable, from my clients' perspective. >> You've got to meet Teresa Carlson, she'll change their minds; she runs AWS public sector and is doing amazing things, obviously, with big government contracts. But you raise a really interesting point, Larry. You almost described what I would say is really a hybrid application in this hotel example that you used, because it is, you know, kind of breaking the application apart and leveraging microservices to do things around the core, allowing them to take advantage of some of this agility and hyper-fast development, yet still maintain that core stuff that either doesn't need to move, works fine, or would be too expensive to refactor. It's a real different way to even think about workloads and applications, breaking those things into bits. >> And we see that pattern all over the place.
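The hotel pattern just described, reads served from a cloud native cache while the booking transaction passes through to the legacy system of record, is essentially a cache-aside facade over the mainframe. A minimal sketch follows, with in-memory stand-ins for both the cache and the legacy system; all class and field names are invented for illustration, not the actual hotel system.

```python
# Sketch of "digital decoupling": rate lookups come from a fast cache,
# and only the reservation transaction goes back to the legacy backend.
# Both backends are in-memory stand-ins here.

class LegacyReservations:
    """Stand-in for the mainframe system of record."""
    def __init__(self):
        self.rates = {"room-101": 199.0}
        self.bookings = []

    def book(self, room: str, guest: str) -> str:
        self.bookings.append((room, guest))
        return f"confirmed {room} for {guest}"

class ReservationFacade:
    """Cloud native front end: cache-aside reads, pass-through writes."""
    def __init__(self, legacy: LegacyReservations):
        self.legacy = legacy
        self.cache = {}                       # e.g. a Couchbase cache in the story

    def get_rate(self, room: str) -> float:
        if room not in self.cache:            # cache miss: load once from legacy
            self.cache[room] = self.legacy.rates[room]
        return self.cache[room]               # later reads never touch legacy

    def reserve(self, room: str, guest: str) -> str:
        return self.legacy.book(room, guest)  # transactions go to the mainframe

facade = ReservationFacade(LegacyReservations())
print(facade.get_rate("room-101"))            # served via the cache path
print(facade.reserve("room-101", "alice"))    # passes through to legacy
```

The design choice mirrors the conversation: the front end can be redeployed every few weeks or days, while the system of record only sees the write path.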
I gave you the hotel example, but take financial services: retail banking, open banking. A lot of those retail applications are on the mainframe; insurance claims too. And if you look at the business value, replicating a lot of, like, the regulatory stuff, the locality stuff, it doesn't make sense to rewrite it; there's no real inherent business value. But if I can wrap it, expose it in a microservices architecture, and now do a cloud native front end, that's going to give me a 360 view of the customer, change the customer experience. You know, I can still get that agility, the innovation cycles of public, by wrapping my legacy environment. >> And Prasad, you wanted to jump in, and I'll give you something to react to, which is the single pane of glass. Right now, how do I manage all this stuff? Not only do I have distributed infrastructure, now I've got distributed applications, like the thing that you just described. And everyone wants to be that single pane of glass; everybody wants to be the app that's open on everybody's screen. How are you seeing people deal with the management complexity of these kinds of distributed infrastructures, if you will? >> Yeah, I think that's an area that's actually very topical these days, because, you know, you're starting to see more and more workloads go to private cloud, and so you've got a hybrid infrastructure. You're starting to see movement from just using VMs to, you know, containers and Kubernetes, and, you know, we talked about serverless and so on. So all of our clients are looking for a way, and you have different types of users as well: you have developers, you have data scientists, you have, you know, operators and so on. So they're all looking for that control plane that allows them access and a view to everything that is out there, that is being used in the enterprise.
And that's where I think, you know, a company like Accenture is able to use the best of breed to provide that visibility to our clients. >> Right. >> Yeah, I mean, you hit the nail on the head. With all the promise of cloud and all the power of these new architectures, it's becoming much more dynamic and ephemeral, with containers and Kubernetes, with serverless computing. That one application for the hotel: they've actually now got some of it running natively on AWS in containers, and they're looking at serverless. So even a single application can span all of that. And one of the things we've seen is, first, you know, a lot of our clients used to look at application management as different from their infrastructure. And the lines are now getting very blurry; you need to have very tight alignment. Take that single application: if my public side goes down, or my mid-tier with, you know, OpenShift on VMware goes down, or my backend mainframe goes down, or the networks that connect them or the devices that talk to them go down. Despite the power, it's a very complex environment. So what we've been doing is, first, we've been looking at, you know, how do we get better synergy across our Application Services teams, that do application management and optimization, and cloud infrastructure? How do we get better alignment with our embedded security, you know, our managed security services, bringing those together? And then what we did was, we got very aggressive with our cloud strategy: you know, how do we manage the world of public? Looking at the public providers, the hyperscalers, and how they hit incredible degrees of automation, we really looked at it and said: hey, look, you've got to operate differently in this new world.
What can we learn from how the public guys are doing it? We came up with this concept we call "run different": you know, how do you operate differently in this new multi-speed, very hybrid world across public, private, and legacy environments? We started to look and say, OK, what is it that they do? You know, first they standardize, and that's one of the big challenges; going into almost all of our clients, there's sprawl, whether it's application sprawl or infrastructure sprawl. >> And "my business is so unique," right? "No business out there has the same processes we do." >> So we started with standardizing, like the Accenture hybrid cloud solution we built in partnership with HPE and VMware; that was an example, because you can't automate unless you standardize. So that was the first thing: standardizing our service catalog. The next thing is the operating model; they obviously operate differently. So we've been putting a lot of time and energy into what I call a cloud and agile operating model. And a big part of that, you hear a lot about DevOps right now, but it's truly putting security and operations into it, DevSecOps: bringing, you know, the development and the operations much more tightly together. So we're spending a lot of time looking at that and transforming operations, reskilling the people. You know, the operators of the future aren't eyes on glass; they're developers. They're writing the data ingestion, the analytics algorithms, you know, to do predictive operations. They're writing the automation scripts to take work out, right? And over time they'll be tuning the AI engines to really optimize the environment. And then finally, as Prasad alluded to, there are the platforms, the control planes that do that.
So, you know, what we've been doing is, we've made significant investments in the Accenture Cloud Platform, our infrastructure automation platforms, and then the application teams with the myWizard framework, and we've started to bring that together, you know, into an integrated control plane that can plug into our clients' environments to really manage seamlessly and provide, you know, automation, analytics, AI across apps, cloud infrastructure, and even security. And that, you know, that really is AIOps, right? I mean, that's delivering on, as the industry starts to define and really coalesce around AIOps, that's what we do. >> So just so I'm clear: it's really your layer, your software management layer, that integrates all these different systems and provides kind of a unified view, control, AI, reporting, et cetera, right? >> Exactly, and it can plug in and integrate, you know, third-party tools to do specific functions. >> I'm just curious: one of the themes that we hear out in the press right now is this kind of pullback from public cloud, apps coming back. Or maybe it was, you know, kind of a rush, maybe a little bit too aggressive. What are some of the reasons why people are pulling stuff back out of public clouds? Was it just the wrong application? The costs were not what they anticipated? What are some of the reasons you see apps coming back in-house? >> Yeah, I think it's a variety of factors. I mean, it's certainly cost, I think, is one. And there are multiple private options now; you know, we don't talk about this much, but the hyperscalers themselves are coming out with their own private options, like Anthos and Outposts and Azure Stack and so on, and Alibaba has Apsara, and so on. So you see a proliferation of that, and then you see many more options around private cloud.
So I think the cost is certainly a factor. The second is, I think, data gravity; that's a very important point, because as you start to see how different applications have to work together, that becomes very important. The third is just compliance and, you know, the regulatory environment. As we look across the globe, even outside the U.S., at Europe and other parts of Asia, as clients move more to the cloud, you know, that becomes an important factor. So as you start to balance these things, I think you have to take a very application-centric view. You see some of those apps moving back, and I think that's the point of the hybrid world: you know, you can have an app running on the private cloud, and then tomorrow, since it's been containerized, you can move it to run on public, and it's all managed. >> Yeah, I mean, cost is a big factor. If you actually look at it, most of our clients, you know, were typically big capex businesses, and all of a sudden they're using this consumption model, and they really didn't have a function to go and look at the thousands or millions of lines on the bill, right? You know, the Azure statement, exactly. I think they misjudged, you know, some of the scale. And, you know, I mean, that's one of the reasons we say it's got to be an application-led modernization; that will really dictate it. And I think, in many cases, people may not have thought through which application, what data. Data gravity is a conversation I'm having with just about every client right now. If I've got a 64 terabyte HANA and that's the core, my crown jewels, that data, you know, how do I get that to TensorFlow? How do I get that... >> Right, but if Andy was here, though, he would say, "We'll send down the," you know, which one is it, the snow plow? The Snowball?
>> Well, there are Snowballs, but I've seen the whole truck trailer that comes out, and he'd say, take that and stick it in the cloud. Because if you've got that data in a single source, right, now you can apply a multitude of applications across that thing. So, you know, they're pushing: get that data into a single source. Of course, then to move it or change it, you run into all these micro line items on the billing statement. >> Take the hotel. I mean, their data is still on the mainframe, so when they need to expose it, they have a database cache and they move it out. You know, particularly as data sets get larger, data gravity becomes a big issue, because no matter what, while Moore's Law might have elongated from 18 to 24 months, the network will always be the bottleneck. So ultimately, you know, as we proliferate more and more data and data sets get bigger and bigger, the network becomes more of a bottleneck. And a lot of times you've got to look at your applications: I've got some legacy database I need to get to, I need this to be proximate, somewhere where I don't have, you know, bandwidth or latency issues. Also, egress costs are a pretty big deal: my data is up in the cloud, and I'm going to get charged for pulling it off. That's been a big issue. >> You know, it's funny. I think a lot of the issue, obviously, is complexity, but it's also a totally different billing model, and I think a lot of people will put stuff in a public cloud and then operate it as if they bought it and it's running in their data center, instead of this turn-it-on, turn-it-off-when-you-need-it model. Everyone loves to talk about turning it on when you need it, but nobody ever talks about turning it off when you don't. But I want to kind of close out our conversation.
I want to talk about AI and applied AI, because there's a lot of talk in the marketplace about AI and machine learning. But as you guys know probably better than anybody, it's the application of AI in specific applications which really unlocks the value. And as we're sitting here talking about this complexity, I can't help but think that, you know, applied AI in a management layer like your "run different" setup, to actually know when to turn things on, when to turn things off, when to move things or not move them: it's going to have to be machines running that, right? Because the data sets and the complexity of these systems are going to be just overwhelming. >> Yeah, absolutely. Completely agree with you. In fact, at Accenture we actually refer to this whole area as applied intelligence; that's our term, right? And it is absolutely about adding more and more automation, moving everything more toward being run by the machine rather than, you know, having people really working on these things. >> Yeah, I mean, you hit the nail on the head: we're going to need AI. I mean, given how things are getting more complex, more ephemeral, you think about Kubernetes, et cetera, we're going to have to leverage it; humans are not going to be able to, you know, manage these environments going forward. What's interesting is we've used it quite effectively for quite some time, but it's good at some stuff, not good at others. So we find it's very good at, like, ticket triage, ticket routing, et cetera. You know, any time we take over an account, we tune our AI engines; we have ticket advisors, et cetera. That's where we've probably gotten the most, you know, the most bang for the buck. We tried it in the network space with less success to start, even with, you know, commercial products that were out there. I think where AI ultimately bails us out of this is, if you look at the problem, you know, a lot of times we talk about optimizing around cost, but then there's performance.
I mean, they're somewhat at odds, you know; you've got to weigh them off against each other. So you've got a very multi-dimensional problem: how do I optimize my workloads, particularly when I've got a Kubernetes cluster and something on Amazon, you know, something running on my private cloud, et cetera? So we've got some very complex environments, and the only way you're going to be able to optimize across multiple dimensions, cost, performance, service levels, you know, and then multiple options of where to do it, public, private, what's my network cost, et cetera, is an AI engine, and people tuning those AI engines. So ultimately, I mean, you heard me earlier on the operators: I think, you know, they write the analytics algorithms, they do the automation scripts, but they're the ultimate ones to then tune the AI engines that will manage our environment. And I think Kubernetes will be interesting, because it becomes a link to the control plane to optimize workload placement, you know, between... >> And then you have dynamic optimization. You might be optimizing for egress right now, but you might be optimizing for output the next day. So it's really a, you know, kind of never-ending thing when you think about it. >> Exactly, and multi-dimensional optimization is very difficult. So, I mean, you know, humans can't get their heads around it; machines can, but they need to be trained. >> Well, Prasad, Larry, lots of great opportunities for Accenture to bring that expertise to the table. So thanks for taking a few minutes to walk through some of these things. >> Our pleasure. Thank you. >> He's Prasad, he's Larry, I'm Jeff. You're watching theCube. We are high above San Francisco in the Salesforce Tower, at the Accenture Innovation Hub in San Francisco. Thanks for watching. We'll see you next time.
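The multi-dimensional placement problem Larry closes on, weighing cost against performance and compliance per workload, can be sketched as a tiny weighted-scoring exercise. The venues, attribute scores, and weights below are invented for illustration; a real optimizer would learn these from telemetry rather than hard-code them.

```python
# Toy workload-placement scorer: score each candidate venue on cost,
# performance, and compliance, weighted per workload, and pick the best.
# All numbers are illustrative assumptions, not real benchmarks or prices.

VENUES = {
    "public":  {"cost": 0.8, "performance": 0.6, "compliance": 0.5},
    "private": {"cost": 0.5, "performance": 0.9, "compliance": 0.9},
}

def best_venue(weights: dict) -> str:
    """Pick the venue with the highest weighted score for this workload."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(VENUES, key=lambda v: score(VENUES[v]))

# A bursty batch job cares mostly about cost; a regulated, 64 TB
# HANA-style workload cares about performance and compliance.
print(best_venue({"cost": 0.7, "performance": 0.2, "compliance": 0.1}))  # prints "public"
print(best_venue({"cost": 0.1, "performance": 0.5, "compliance": 0.4}))  # prints "private"
```

The point of the sketch is the shape of the problem: once the dimensions multiply (egress, latency, service levels, regional regulation), hand-tuning these weights stops scaling, which is the opening for the AI-driven tuning discussed above.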

Published Date : Sep 9 2019



Prasad Sankaran & Larry Socher, Accenture Technology | Accenture Innovation Day


 

>> Hey, welcome back, everybody. Jeff Frick here from theCube. We're high atop San Francisco in the Accenture Innovation Hub; it's in the middle of the Salesforce Tower. It's a beautiful facility; I think they had the grand opening about six months ago, and we were here for it. Very cool space: they've got maker studios, they've got all kinds of crazy stuff going on. But we're here today to talk about cloud, and this continuing evolution of cloud in the enterprise: hybrid cloud and multi-cloud, public cloud and private cloud. And we're really excited to have a couple of guys who are really helping customers make this journey, because it's really tough to do by yourself. CIOs are super busy; they worry about security and all kinds of other things, so Accenture is often a trusted partner. We've got two of the leaders from Accenture joining us today. He's Prasad Sankaran, the senior managing director of Intelligent Cloud Infrastructure for Accenture, welcome; and Larry Socher, the global managing director of the Intelligent Cloud Infrastructure offering, also from Accenture. Gentlemen, welcome. I love it: Intelligent Cloud. What is an intelligent cloud all about? You've got it in your title; it must mean something pretty significant. >> Yeah, I think, first of all, thank you for having us. But yeah, absolutely, everything's around becoming more intelligent, around using more automation, in the work that, you know, we deliver to our clients. And cloud, as you know, is the platform to which all of our clients are moving. So it's all about bringing the intelligence not only into infrastructure but also into cloud generally. And it's all driven by software. >> Right. It's just funny to think where we are in this journey. We talked a little bit before we turned the cameras on, and you made an interesting comment when I said, you know, when did this cloud for the enterprise start? And you took it back to SaaS-based applications, which, >> you know, well, we are sitting in the Salesforce building.
>> That's true. It isn't just the tallest building in town. >> Everyone's, you know, got a lot of focus on AWS's rise, et cetera, but the real start was really getting into SaaS. I mean, I remember we used to do a lot of Siebel deployments for CRM, and we started to pivot to Salesforce; some were moving from Remedy into ServiceNow. I mean, we went through on-premise collaboration, email, to Office 365. So we've actually been at it for quite a while, particularly in the SaaS world. And it's only more recently that we started to see that kind of push to, you know, the public PaaS, and the start of cloud native development. But this journey started, you know, it was seven or eight years ago that we really started to see some scale around it. >> And I think, and tell me if you agree, I think really what the Salesforces of the world, and the ServiceNows of the world, and Office 365 did was break down some of those initial barriers, which were all really about security, security, security. It's funny to hear, where now security is actually probably an attribute that cloud can bring. >> Absolutely. In fact, I mean, those barriers took years to bring down. I still saw clients where they were forcing Salesforce or ServiceNow to put, you know, instances on-prem. And I think they finally woke up to the fact that, you know, these guys have invested a ton in their security organizations. You know, there's a little of that needle-in-the-haystack effect: if you breach a data center, you know what you're getting after, but when you're up in Salesforce, it's a lot harder. And so, you know, I think those security concerns have certainly gone away. We still have some compliance, regulatory, data sovereignty things. But I think security, and not that it's solved by any means, you know, it's always going to be an ongoing problem, but I think they're getting more comfortable with their data being up in the public domain. Right? Well, not public...
>> And I think it also helped them with their progress towards getting cloud native. So, you know, you pick certain applications which were obviously hosted by sales force and other companies, and you did some level of custom development around it. And now I think that's paved the way for more complex applications and different workloads now going into, you know, the public cloud and the private cloud. But that's the next part of the journey, >> right? So let's back up 1/2 a step, because then, as you said, a bunch of stuff then went into public cloud, right? Everyone's putting in AWS and Google. Um, IBM has got a public how there was a lot more. They're not quite so many as there used to be, Um, but then we ran into a whole new host of issues, right, which is kind of opened up this hybrid cloud. This multi cloud world, which is you just can't put everything into a public clouds. There's certain attributes is that you need to think about and yet from the application point of view before you decide where you deploy that. So I'm just curious. If you can share now, would you guys do with clients? How should they think about applications? How should they think about what to deploy where I >> think I'll start in? The military has a lot of expertise in this area. I think you know, we have to obviously start from an application centric perspective. You go to take a look at you know where your applications have to live water. What are some of the data implications on the applications, or do you have by way of regulatory and compliance issues, or do you have to do as faras performance because certain applications have to be in a high performance environment. Certain other applications don't think a lot of these factors will. Then Dr where these applications need to recite and then what we think in today's world is really accomplish. Complex, um, situation where you have a lot of legacy. But you also have private as well as public cloud. 
So you approach it from an application perspective. >> Yeah. I mean, if you really take a look at Accenture's clients, we're totally focused on the upper end of the market, the Global 2000, and, you know, our clients typically have application portfolios ranging from 500 to 20,000 applications. And really, I mean, if you think about the purpose of cloud, or even infrastructure for that matter, they're there to serve the applications. No one cares how well your cloud infrastructure is performing if the application isn't. So we start off with an application modernization approach, and ultimately, you know, with our tech advisory guys coming in, our intelligent engineering services doing the cloud native and app mod work, our platforms guys, who do, you know, everything from Salesforce through SAP, they drive a strategy on how those applications are going to evolve. With those 500 to 20,000 applications, you determine, usually using something like the six Rs methodology: hey, am I going to retire this? Am I going to retain it? Am I going to replace it with SaaS? Am I going to re-factor or re-platform it? And it's ultimately that strategy that's really going to dictate a multi- and, you know, hybrid cloud story. So it's based on the applications: data gravity issues, where they're going to reside, their requirements around regulatory, their requirements for performance, et cetera. That will then dictate the cloud strategy. I'm, you know, not a big fan of going in there and just doing a multi- or hybrid cloud strategy without a really good up-front application portfolio approach, right? How are we going to modernize that? >> And how do you segment? That's a lot of applications. You know, how do you help them prioritize where they should be focusing first?
>> So typically what we do is work with our clients to do a full application portfolio analysis, and then we're able to segment the applications based on, you know, importance to the business and some of the factors that both of us mentioned. And once we have that, then we come up with an approach where certain sets of applications move to SaaS, certain other applications move to PaaS, so you know, you're basically doing the re-factoring and the modernization, and then certain others, you know, you can just lift and shift. So it's really a combination of both modernization as well as migration. But to do that, you have to initially look at the entire set of applications and come up with that approach. >> I'm just curious, within that application assessment, where is cost savings, where is "this is just old," and where are the opportunities to innovate faster? Because we know a lot of the talk these days is cost savings, but the real advantage is execution speed, if you can get it. >> If you go back three or four years, there were a lot of CIO discussions around cost savings. Now we've really seen our clients shift. Cost never goes away, obviously, right? But there's a lot greater emphasis now on business agility: you know, how to innovate faster, getting your capabilities to market faster, changing the customer experience. So IT is really trying to step up and, you know, enable the business to compete in the marketplace. We're seeing a huge shift in emphasis, or focus at least, starting with, you know, how do I get better business agility out of leveraging cloud and cloud native development, and then the service levels. Actually, we've started seeing increased pressure on, hey, you know, these applications need to work.
So obviously cost still remains a factor, but we see much more emphasis on agility, you know, enabling the business, and giving the right service levels and the right experience to the users and customers. A big pivot there. >> Okay. And let's get the definitions out, because, you know, there's a lot of conversation about clouds: public cloud, easy; private cloud, easy; but hybrid cloud and multi-cloud, there's confusion about what those are. How do you guys define them? How do you help your customers think about the definitions? >> Yes, I think it's a really good point. So what we're starting to see is that there were a lot of different definitions out there, but I think as I talk to more clients and our partners, we're all starting to, you know, come to the same kind of definition. Multi-cloud is really about using more than one cloud. But hybrid, I think, is a very important concept, because hybrid is really all about the placement of the workload, or where your application is going to run. And then again, it goes to all of these points that we talked about, data gravity and performance and other factors. But it's really all about where you place the specific workload. >> If you look at it, so if you think about public, I mean, obviously it gives us the innovation of the public providers. You look at how fast Amazon comes out with new versions of Lambda, et cetera, so there's the innovation. There's obviously agility: you can spin up environments very quickly, which is, you know, one of the big benefits. The consumption economic models. So there are a number of drivers pushing in the direction of public. You know, on the private side, there are still quite a few benefits that don't get talked about as much.
So, you know, if you look at it: performance. In the public world, you know, although they're scaling up larger T-shirt sizes, et cetera, they're still trying to do that for a large array of applications. On the private side, you can really tailor something to very high performance characteristics. Whether it's, you know, a 30 to 64 terabyte HANA, you can get a much more focused, precision environment for business-critical workloads like that: Oracle, Oracle RAC, the Hadoop clusters doing everything from fraud analysis. So that's a big part of it. Related to that is the data gravity that Prasad just mentioned. You know, if I've got a 64 terabyte HANA database, you know, sitting in my private cloud, it may not be that convenient to go and get that data shared up in Redshift or in Google's TensorFlow. So there's some data gravity there; the networks just aren't there, and the latency of moving that stuff around is a big issue. And then a lot of people have investments in their data centers. I mean, the other piece that's interesting is legacy. You know, as we start to look at the world, there's a ton of code still living in, you know, whether it's Unix systems or IBM mainframes. There's a lot of business value there, and sometimes the business cases aren't necessarily there to replace them, right? And in the world of digital decoupling, where I can start to use microservices, we're seeing a lot of trends. We worked with one hotel to take their reservation system and, you know, wrap it in microservices; we then did, you know, an OpenShift, Couchbase front end. And now, when you go and browse properties, when you're looking at rates, you're actually going into a distributed database cache, you know, using the latest cloud native technologies that can be dropped every two weeks, or every three or four days for my mobile application.
And it's only when, you know, the transaction goes back to reserve the room that it goes back to the legacy system. So we're seeing a lot of power with digital decoupling, but we still need to take advantage of, you know, these legacy applications. So the data centers, we're really trying to evolve them, and really, just, you know, learn everything from the world of public and start to bring those similar types of efficiencies to the world of private. And really, what we're seeing is this emerging approach where I can start to take advantage of the innovation cycles, the Lambdas, you know, the Redshifts, the functions of the public world, but then maybe keep some of my more business-critical, regulated workloads private. You know, that's the other side of the private story, right? I've got GxP compliance, I've got HIPAA data that I need to worry about, GDPR; there's, you know, a whole set of regulatory requirements. Now, over time, we do anticipate the public guys will get much better and more compliant, and in fact they've made great headway already, but a number of clients are still, you know, not 100% comfortable, from my clients' perspective. >> You've got to meet Teresa Carlson. She'll change them; she runs AWS public sector and is doing amazing things, obviously, with big government contracts. But you raise a real interesting point here. You almost described what I would call a hybrid application in this hotel example that you used, because it is, you know, kind of breaking the application apart and leveraging microservices to do things around the core. That lets you take advantage of some of this agility and hyper-fast development, yet still maintain that core stuff that either doesn't need to move, works fine, or would be too expensive to re-factor. It's a real different way to even think about workloads and applications, breaking those things into bits. >> And we see that pattern all over the place.
I just gave you the hotel example, but finance, you know, look at financial services, retail banking. So open banking: a lot of those retail applications are on the mainframe. Or insurance claims. And if you look at the business value of rewriting a lot of, like, the regulatory stuff, the locality stuff, it doesn't make sense to rewrite it; there's no real inherent business value. But if I can wrap it and expose it in a microservices architecture, now I can do a cloud native front end that's going to give me a 360 view of the customer, change the customer experience. You know, I can still get that agility, the innovation cycles of public, by wrapping my legacy environment. >> And Prasad, you brought it up, so I'll give you something to react to, which is the single pane of glass. How do I manage all this stuff now? Not only do I have distributed infrastructure, now I've got distributed applications, like the thing that you just described. And everyone wants to be that single pane of glass; everybody wants to be the app that's on everybody's screen. How are you seeing people deal with the management complexity of these kinds of distributed infrastructures, if you will? >> Yeah, I think that's an area that's actually very topical these days, because, you know, you're starting to see more and more workloads go to private cloud, and so you've got a hybrid infrastructure. You're starting to see movement from just using VMs to, you know, containers and Kubernetes, and, you know, we talked about serverless and so on. So all of our clients are looking for a way, and you have different types of users as well: you have developers, you have data scientists, you have, you know, operators and so on. They're all looking for that control plane that allows them access and a view to everything that is out there being used in the enterprise.
And that's where I think, you know, a company like Accenture is able to use the best of breed to provide that visibility to our clients. >> Right. >> Yeah, I mean, you hit the nail on the head. With all the promise of cloud and all the power of these new architectures, it's becoming much more dynamic and ephemeral, with containers and Kubernetes, with serverless computing. That one application for the hotel, they've actually now got some of it running natively on AWS in containers, and they're looking at serverless. So even a single application can span all of that. And one of the things we've seen is, first, you know, a lot of our clients used to look at application management, you know, separately from their infrastructure. And the lines are now getting very blurry; you need to have very tight alignment. Take that single application: if my public side goes down, or my mid-tier with my, you know, OpenShift on VMware goes down, or my back-end mainframe goes down, or the networks that connect to it, or the devices that talk to it, go down... Despite the power, it's a very complex environment. So what we've been doing is, first, we've been looking at, you know, how do we get better synergy across our application services teams, that do the application management and optimization, and cloud infrastructure. How do we get better alignment with our embedded security, you know, our managed security services, bringing those together. And then, you know, we got very aggressive with a cloud-first strategy and, you know, how do we manage the world of public. But looking at the public providers, the hyperscalers, and how they hit incredible degrees of automation, we really looked at that and said: hey, look, you've got to operate differently in this new world.
What can we learn from how the public guys are doing it? We came up with this concept we call "run different." You know, how do you operate differently in this new multi-speed, you know, very hybrid world across public, private, and legacy environments? We started to look and say, OK, what is it that they do? You know, first, they standardize, and that's one of the big challenges. You know, we go into almost all of our clients and there's sprawl, whether it's application sprawl or infrastructure sprawl. >> And "my business is so unique," like no business out there has the same processes, right? >> So we started with, you know, how do we standardize, like our Accenture hybrid cloud solution we built in partnership with HPE and VMware; that was an example, because you can't automate unless you standardize. So that was the first thing, you know, standardizing our service catalog, standardizing that. You know, the next thing is the operating model; they obviously operate differently. So we've been putting a lot of time and energy into what I call a cloud and agile operating model. And a big part of that, you hear a lot about DevOps right now, but it's truly putting security and operations into it, DevSecOps, bringing, you know, the development and the operations much more tightly together. So we're spending a lot of time looking at that and transforming operations, re-skilling the people. You know, the operators of the future aren't eyes on glass; they're developers. They're writing the data ingestion, the analytics algorithms, you know, to do predictive operations. They're writing the automation scripts to take work out, right? And over time they'll be tuning the AI engines to really optimize the environment. And then finally, as Prasad alluded to, there are the platforms, the control planes, that do that.
So, you know, what we've been doing is we've made significant investments in the Accenture Cloud Platform, our infrastructure automation platforms, and then the application teams with the myWizard framework, and we've started to bring that together. You know, it's an integrated control plane that can plug into our clients' environments to really manage seamlessly, you know, and provide, you know, automation, analytics, AI across apps, cloud infrastructure, and even security. And that, you know, that really is AIOps, right? I mean, that's delivering on, you know, as the industry starts to define and really coalesce around AIOps, that's what we do. >> So just so I'm clear: it's really your layer, your software layer, kind of a management layer, that integrates all these different systems and provides kind of a unified view, control, AI, reporting, et cetera, right? >> Exactly. And it can plug in and integrate, you know, third-party tools to do certain functions. >> I'm just curious: one of the themes that we hear out in the press right now is this kind of pull-back from public cloud, apps that are coming back. Or maybe it was, you know, kind of a rush, maybe a little bit too aggressive. What are some of the reasons why people are pulling stuff back out of public clouds? Was it just the wrong application? The costs were not what they anticipated them to be? What are some of the reasons that you see apps coming back in-house? >> Yeah, I think it's a variety of factors. I mean, cost certainly is one. So now there are multiple private options, and, you know, we don't talk about this much, but the hyperscalers themselves are coming out with their own private options, like Anthos and Outposts and Azure Stack and so on, and Alibaba has its own as well. So you see a proliferation of that, and then you see many more options around private cloud.
So I think cost is certainly a factor. The second, I think, is data gravity, a very important point, because as you start to see how different applications have to work together, that becomes very important. The third is just about compliance and, you know, the regulatory environment. As we look across the globe, even outside the U.S., at Europe and other parts of Asia, as clients move more to the cloud, you know, that becomes an important factor. So as you start to balance these things, I think you have to take a very application-centric view. You see some of those apps moving back, and I think that's the point of the hybrid world: you know, you can have an app running on the private cloud, and then tomorrow you can move it, since it's been containerized, to run on public, and it's, you know, all managed. >> Exactly right. I mean, cost is a big factor. If you actually look at it, most of our clients, you know, were typically big cap-ex businesses, and all of a sudden they're using this consumption model, and they really didn't have a function to go and look at the thousands or millions of lines on the bill, right? You know, your Azure statement, exactly. I think they misjudged, you know, some of the scale, and, you know, that's one of the reasons we say it's got to be an application-led, you know, modernization; that's really what will dictate it. And I think in many cases people may not have thought through which application, what data. Data gravity is a conversation I'm having with just about every client right now. If I've got a 64 terabyte HANA and that's the core, my crown jewels, that data, you know, how do I get that to TensorFlow? How do I get that...? >> Right, but if Andy were here, though, he would say we'll send down the Snow... which version is it, Snowplow? Snowball?
>> Well, there are Snowballs, but I've seen the whole truck trailer that comes out. And he'd say, take that and stick it in the cloud, because if you've got that data in a single source, now you can apply a multitude of applications across that thing. So they, you know, they're pushing: get that data into a single source. Of course, then to move it or change it, you know, you run into all these micro line items on the billing statement. >> Take the hotel. I mean, their data is still on the mainframe, so when they need to expose it, yeah, they have a database cache and they move it out. You know, particularly as data sets get larger, the data gravity becomes a big issue, because no matter how much, you know... while Moore's Law might have elongated from 18 to 24 months, the network will always be the bottleneck. So ultimately, we're seeing, you know, as we proliferate more and more data, as data sets get bigger and bigger, the network becomes more of a bottleneck. And a lot of times you've got to look at your applications; they say, I've got some legacy database I need to get to, I need this to be proximate, somewhere where I don't have, you know, high-bandwidth or, you know, high-latency issues. Also, egress costs are a pretty big deal: my data is up in the cloud, and I'm going to get charged for pulling it off. You know, that's been a big issue. >> You know, it's funny, I think a lot of the issue, obviously, is the complexity of billing; it's a totally different billing model. But I think, too, a lot of people will put stuff in a public cloud and then operate it as if they bought it and are running it in their data center, instead of this turn-it-on, turn-it-off-when-you-need-it model. Everyone loves to talk about turning it on when you need it, but nobody ever talks about turning it off when you don't. But to kind of close out our conversation,
I want to talk about AI and applied AI, because there's a lot of talk in the marketplace about AI and machine learning. But as you guys know probably better than anybody, it's the application of AI in specific applications that really unlocks the value. And as we're sitting here talking about this complexity, I can't help but think that, you know, applied AI in a management layer like your "run different" setup, to actually know when to turn things on, when to turn things off, when to move things or not move them: it's going to have to be machines running that, right? Because the data sets and the complexity of these systems are going to be just overwhelming. >> Yeah, absolutely. I completely agree with you. In fact, at Accenture we actually refer to this whole area as applied intelligence; that's our brand, right? And it is absolutely about adding more and more automation, moving everything more to where it's being run by the machine rather than, you know, having people really working on these things. >> Yeah, I mean, you hit the nail on the head: we're going to need AI. I mean, given how things are getting more complex, more ephemeral, you think about Kubernetes, et cetera, we're going to have to leverage AI; humans are not going to be able to, you know, manage these environments. Now, what's interesting is we've used AI quite effectively for quite some time, but it's good at some stuff, not good at others. So we find it's very good at, like, ticket triage, ticket routing, et cetera. You know, any time we take over an account, we tune our AI engines; we have ticket advisors, et cetera. That's probably where we've gotten the most, you know, the most bang for the buck. We tried it in the network space with less success to start, even with, you know, the commercial products that were out there. I think where AI ultimately bails us out is, if you look at the problem, you know, a lot of times we talk about optimizing around cost, but then there's performance.
I mean, they're somewhat... you know, you've got to weigh them off against each other. So you've got a very multi-dimensional problem of how to optimize my workloads, particularly when I've got a Kubernetes cluster, something on Amazon, you know, something running on my private cloud, et cetera. So we've got some very complex environments. And the only way you're going to be able to optimize across multiple dimensions, cost, performance, service levels, you know, and then multiple options of where to run it, public, private, you know, what's my network cost, et cetera, is an AI engine, and tuning those AI engines. So ultimately, I mean, you heard me earlier on the operators: I think, you know, they write the analytics algorithms, they do the automation scripts, but they're the ultimate ones to then tune the AI engines that will manage our environment. And I think Kubernetes will be interesting, because it becomes a link to the control plane to optimize workload placement, you know, between... >> And then you have dynamic optimization. You might be optimizing for egress costs right now, but you might be optimizing for output the next day. So it's really, you know, kind of a never-ending... >> Exactly, and multi-dimensional optimization is very difficult. So, I mean, you know, humans can't get their heads around it. Machines can, but they need to be trained. >> Well, Prasad, Larry, lots of great opportunities for Accenture to bring that expertise to the table. So thanks for taking a few minutes to walk through some of these things. >> Our pleasure. Thank you. >> He's Prasad, he's Larry, I'm Jeff. You're watching theCube. We are high above San Francisco in the Salesforce Tower, at the Accenture Innovation Hub in San Francisco. Thanks for watching. We'll see you next time.

Published Date : Aug 28 2019



Tom Davenport, Babson College | MIT CDOIQ 2019


 

>> From Cambridge, Massachusetts, it's theCube, covering the MIT Chief Data Officer and Information Quality Symposium 2019. Brought to you by SiliconANGLE Media. >> Welcome back to MIT, everybody. You're watching theCube, the leader in live tech coverage. My name is Dave Vellante, here with Paul Gillin, my co-host. Tom Davenport is here; he is the President's Distinguished Professor at Babson College. Good to see you again, Tom. >> Glad to be here. >> So, let's see, this is the 13th annual MIT CDOIQ. >> Yeah, I believe so. >> And our seventh covering it, I think. >> Really? >> So you gave a talk earlier: should we be afraid of the machines, or should we embrace them? >> I think we should embrace them, because so far they are not capable of replacing us. I mean, you know, when we hit the singularity, which I'm not sure will ever happen, but it's certainly not going to happen anytime soon, we'll have a different answer. For now, they're good at small, narrow tasks, not so good at doing a lot of the things that we do. So I think we're fine. Although, as I said in my talk, I have some survey data suggesting that at large U.S. corporations, a substantial number of senior executives, more than half, would like to automate as many jobs as possible, they say. So that's a little scary. But fortunately for us humans, it's going to be a while before they succeed. >> We had a case last year where McDonald's employees were agitating for an increase in the minimum wage, and the management used the threat of robotizing the hamburger-making process, which can be done, right, to get them to back down. Do you think we're going to see more of that, where maybe AI is used as a threat? >> Well, I haven't heard too many other examples.
I think for those highly structured, relatively low-level tasks, it's quite possible, particularly if we do end up raising the minimum wage beyond a point where it's economical to pay humans to do the work. But I would like to think that, you know, if we gave humans the opportunity, they could do more than they're doing now in many cases. And one of the things I was saying is that I think most companies, with some exceptions, are not starting to retrain their workers. Amazon recently announced they're going to spend $700 million to retrain their workers to do things that AI and robots can't. But that's pretty rare; certainly that level of commitment is very rare. So I think it's time for companies to start stepping up and saying, how can we develop a better combination of humans and machines? >> The work by, you know, Brynjolfsson and McAfee, which is a little dated now, definitely suggests that there are some things to be concerned about. Of course, ultimately their prescription was one of an optimist: education and so forth. But, you know, the key point there is that machines have always replaced humans, but now it's in terms of cognitive functions. You see it everywhere: you drive to the airport, and now it's electronic billboards; it's not some person putting them up, it's kiosks, et cetera. But, you know, you've used the term, you know, "pave the cow path." We don't want to protect the past from the future. All right, so to your point, retraining, education. I mean, that's the opportunity here, isn't it? And the potential is enormous. >> Well, and, you know, let's face it, we haven't had much in the way of productivity improvement in the U.S. or any other advanced economy lately. So we need some, you know, replacement of humans by machines. But my argument has always been that you can handle innovation better.
You can avoid the sort of race to the bottom that automation sometimes leads to, if you think creatively about humans and machines working as colleagues. >> In many cases. You remember in the PC boom, I forget if it was a Fed chairman, it might have been Greenspan, who said you can see computers everywhere except in the productivity statistics. >> That was an MIT professor, Robert Solow. >> Okay, right, and he then won the Nobel Prize. But shortly thereafter there was a huge productivity boom, so maybe there's a pent-up effect. Well, who knows; everybody's wondering. We've been spending literally trillions on IT, and you would think that it would have led to productivity. But certain things, like social media, I think reduce productivity in the workplace: we're all chatting and talking and Slacking all over the place. Maybe that's not conducive to getting work done. It depends what you do with that social media. >> Here in our business it's actually phenomenal to see political coverage these days, which almost entirely consists of reprinting politicians' tweets. >> Exactly. I guess it's made life easier for them, all those reporters sitting in the White House waiting for a press conference. >> They're not doing well; there aren't many reporters left. Where do you see, in your consulting work and your academic work, AI being used most effectively in organizations right now? And where do you think that's going to be three years from now? >> Well, the general category of use case is the sort of thing some are calling "boring AI." It's data integration, one thing that's being discussed a lot at this conference; it's connecting your invoices to your contracts to see, did we actually get the stuff that we contracted for; it's doing a little bit better job of identifying fraud, and doing it faster. So all of those things are quite feasible. They're just not that exciting.
What we're not seeing is curing cancer or creating fully autonomous vehicles; the really aggressive moonshots that we've been trying for a while just haven't succeeded yet. >> What if we expand AI to include the newer, cooler stuff that's coming out? Considering all this new technology, AI, blockchain, new security approaches, when do you think machines will be able to make better diagnoses than doctors? >> Well, in a very narrow sense, in some cases they can do it now. But the thing is, take a radiologist, which is one of the doctors I think most at risk from this, because they don't typically meet with patients and they spend a lot of time looking at images. It turns out that the lab experiments that say, you know, this AI is better than a human radiologist tend to be very narrow, and what one lab does is different from another lab. So it's going to take a very long time to make it into production deployment in the physician's office; we'll probably have to have some regulatory approval of it. The lab research is great; it's getting it into day-to-day reality that's the problem. >> Okay, so staying under this umbrella topic of digital, do you think large retail stores will largely disappear? >> In some sectors more than others, for things that you don't need to touch and feel and see before you buy them. Certainly it's happening more and more in e-commerce. What people are saying will disappear next is the human at the point of sale, and we've been talking about that for a while in grocery. Not so much achieved yet in the U.S. Amazon Go is a really interesting experiment; every time I go in there, I try to shoplift. It took a while, and now they have 12 stores.
It's not huge yet, but I think if you're in one of those jobs where a substantial chunk of the work is automatable, then you really want to start looking around and thinking, what else can I do to add value beyond these machines? >> Do you think traditional banks will lose control of the payment system? >> No, I don't, because the fintechs that you've seen thus far keep getting bought by traditional banks. So my guess is that people will want that certainty. And you know, the funny thing about blockchain: we say in principle it's more secure because it's spread across a lot of different ledgers, but people keep hacking into Bitcoin, so it makes you wonder. I think blockchain is going to take longer than we thought as well. You know, in my latest book, which is called "The AI Advantage," I start out by talking about Amara's Law. This guy Roy Amara was a futurist, not nearly as well known as Moore, but he said that for every new technology we tend to overestimate its impact in the short run and underestimate it in the long run. And so I think AI will end up doing great things; we may just have sort of tuned it out by the time it actually happens and we finally have the autonomous vehicles we've been talking about for 50 years. >> Last one. One of the Democratic candidates in the debate last night mentioned a chief manufacturing officer. Do you see that automation will actually swing the pendulum and bring manufacturing back to the U.S.? >> I think it could, if we were really aggressive about using digital technologies in manufacturing: doing 3D manufacturing, doing digital twins of every device, and so on. But we are not being as aggressive as we ought to be, and manufacturing companies have been kind of slow and, I think, somewhat delinquent in embracing these things. So they're going to, I think, lose the ability to compete. We have to really go at it in a big way to bring it all back. >> We've got an election coming up.
There's a lot of concern, following the last election, about the potential of AI chatbots, Twitter bots, deep fakes, technologies that obscure or alter reality. Are you worried about what's coming in the next year? >> That could never happen, Paul; we could never see anything like that. No, deep fakes I'm quite worried about. I know there are some organizations working on how we would certify, you know, an image as being real, but we're not there yet. My guess is that certainly by the time the election happens, we're going to have all sorts of political candidates saying things that they never really said, through deep fakes and image manipulation. >> Scary. What do you think about the calls to break up Big Tech? What's your position on that? >> I think it's a self-inflicted wound. You know, we just saw, for example, that the automobile manufacturers decided to get together: even though the federal government isn't asking for better mileage, they said, we'll do it, we'll work with the union of states that are more advanced. If Big Tech had said, we're going to work together to develop standards for ethical behavior and privacy and data and so on, they could have prevented some of this. I've seen some of it at Salesforce; people are talking about the need for data protection standards. But unless they change their attitude really quickly, I think they're going to get legislation imposed and maybe get broken up. It's going to take a while, and it depends on the next administration, but they're not being smart about it. >> I'm sure you've seen a lot of demos of advanced AI-type technology over the last year. What has really impressed you? >> You know, I think the biggest advances have clearly been in image recognition. A big problem with that is that you need a lot of labeled data.
It's one of the reasons why Google was able to identify cat photos on the Internet: we had a lot of labeled cat images in the ImageNet open-source database. But the ability to start generating images, to do synthetic labeled data, I think could really make a big difference in how rapidly image recognition progresses. >> Wait, synthetic? I'm sorry. >> We would actually create the data. We wouldn't have to have somebody go around taking pictures of cats; we'd generate a bunch of different cat photos, label them as cat photos, and build variations into them. Unless we have a lot of variation in images, well, that's one of the reasons why we can't use autonomous vehicles yet: images differ in the rain and the snow. So we're going to have to have synthetic snow and synthetic rain to identify those images, so that the GPU chip still realizes that's a pedestrian walking across there, even though the image is kind of fuzzed up. Right now just a little bit of variation in the image can throw off the recognition altogether. >> Tom, hey, thanks so much for coming on theCube. Great to see you. We've got to go play catch. >> You're welcome. >> All right, keep it right there, everybody; we'll be right back from MIT CDOIQ in Cambridge, Massachusetts. Dave Vellante with Paul Gillin. You're watching theCube.
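[Editor's aside: the synthetic labeled data idea Davenport describes, generating variations of an existing labeled image instead of collecting new photos, can be sketched in a few lines. This is a hypothetical toy illustration using plain pixel grids, not any real augmentation library or his actual method; production pipelines use tools like image-augmentation libraries or generative models.]

```python
import random

def augment(image, n_variants=4):
    """Generate labeled variants of one image (a 2D list of 0-255 pixel values).

    Sketches the idea from the conversation: rather than photographing more
    cats, or more rainy roads, we synthesize variations (flips, brightness
    shifts, noise) so a classifier sees those conditions during training.
    """
    variants = []
    for _ in range(n_variants):
        rows = [row[:] for row in image]         # copy the original pixels
        if random.random() < 0.5:                # sometimes flip horizontally
            rows = [row[::-1] for row in rows]
        gain = random.uniform(0.7, 1.3)          # brightness shift
        noisy = [
            [max(0, min(255, int(p * gain + random.gauss(0, 10)))) for p in row]
            for row in rows                      # add noise ("synthetic rain"), clamp to 0-255
        ]
        variants.append(noisy)
    return variants

# Expand a tiny labeled set: every variant inherits the original's label.
random.seed(0)
image = [[128] * 8 for _ in range(8)]            # stand-in for one "cat" photo
dataset = [(image, "cat")] + [(v, "cat") for v in augment(image)]
print(len(dataset))                              # 5 labeled examples from one original
```

The label travels with each generated variant for free, which is the whole appeal: one hand-labeled original yields many training examples under conditions (rain, snow, blur) never actually photographed.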

Published Date : Jul 31 2019
