Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1
(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited to be joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding, pre-pandemic, and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just about every industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like to actually succeed with AI, companies like OpenAI or Google, or you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was being talked about: it was just making the infrastructure programmable with code. That's super important. That's what AI people want, programmable AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT. 
Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models, foundation models, are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models, or open source versions like GPT-J, do that training on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them; those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems. Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned the data center into an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of the big deal here? Is that kind of a big deal happening, that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. 
We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that with Ray and Anyscale, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray and Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will; you have to make it easier to do. >> And just for clarification, to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray in order to provide an easy, simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray; basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure, and provide more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning it as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and, you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. 
Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale, processing petabytes of data every single day. And so the project has seen just enormous adoption over the past few years. And one of the most exciting use cases is really providing the infrastructure for building, training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there: things like supervised pre-training, also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about which response to a particular question, you know, is better than another response. And incorporating that into the learning. There's open source versions as well, like GPT-J, also built on top of Ray, as well as projects like Alpa coming out of UC Berkeley. So these are some of the examples of exciting projects and organizations training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries, for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyperparameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. 
If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale, and it integrates with, libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn, or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right. Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake, or tools for model monitoring, for feature stores; all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast; it's the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale. 
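A minimal sketch of the Ray core primitives Robert describes in the presentation, turning ordinary Python functions and classes into distributed tasks and actors. This is an illustrative example rather than code from the talk; it assumes a recent Ray 2.x install, and the function names and values are placeholders.

```python
import ray

# Connect to an existing Ray cluster, or start a local one on a laptop.
ray.init()

# A plain Python function becomes a distributed "task" with one decorator.
@ray.remote
def preprocess(x):
    return x * 2

# A plain Python class becomes a stateful, distributed "actor".
# Resource requests (e.g. @ray.remote(num_gpus=1)) control where it is placed.
@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

# Kick off work across the cluster; .remote() returns futures immediately.
futures = [preprocess.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 2, 4, 6, 8, 10, 12, 14]

counter = Counter.remote()
print(ray.get(counter.add.remote(10)))  # 10
```

The higher-level libraries mentioned above, such as Ray Data, Ray Train, Ray Tune, and Ray Serve, are built on these same task and actor primitives.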
>> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow, or Python's the most friendly with machine learning, or is it because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on the Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to that number of users of anyone in the history of the computer industry, so major success for OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale, and it came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are named for. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have some inside prompt training going on there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly. If I asked it to write a sales document, it probably would, but this is the future interface. This is why people are getting excited about the foundational models and the large language models, because it's allowing the interface with the user, the consumer, to be more human, more natural. And this clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time, Robert, and say, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. 
So getting in early is a good thing and that's what everyone's talking about, getting in early and playing around, maybe replatforming or even picking one or a few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in, and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here, is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using, to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a 5x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that are, you know, processing petabytes of data every single day with Ray getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year, getting a 10x cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect looking at this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive development, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, a bunch of memory. And as that's running, and by the way, if I wanted to run this on, instead of 32 GPUs, 64 or 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS Code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray Train to train the Torch model. 
We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using DeepSpeed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray Train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here, where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both that I can get that interactive development experience with VS Code, you know, I can look at the dashboards, I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can do that with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, of being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money, is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, if there's no work, okay, it's just compute sitting there idle. But you know, data cranking through the models, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy, if you're running on Anyscale, to use spot instances and these preemptible instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale, they go from not using these spot instances, 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. 
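To make the "one line change" from the demo concrete, here is a hedged sketch of what a Ray Train setup along these lines can look like: a vanilla PyTorch/Hugging Face training function handed to a TorchTrainer, with the worker count and the one-GPU, four-CPU-per-worker allocation expressed in a single ScalingConfig. This is not the actual demo code; the model name is a placeholder, the training loop is elided, and import paths vary slightly across Ray versions.

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # Ordinary PyTorch / Hugging Face code runs here on every worker;
    # Ray Train sets up the distributed process group behind the scenes,
    # and DeepSpeed can be layered in for larger models.
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")  # placeholder model
    # ... data loading and training loop elided ...

# Scaling is concentrated in this one config: moving from 32 GPUs to 64 or 128
# is a change to num_workers, and each worker gets one GPU and four CPU cores.
trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(
        num_workers=32,
        use_gpu=True,
        resources_per_worker={"GPU": 1, "CPU": 4},
    ),
)
result = trainer.fit()
```

Running on CPUs or a different accelerator type is likewise a matter of editing the ScalingConfig rather than the training code.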
>> You know, this was my whole, my feature article at re:Invent last year when I met with Adam Selipsky: this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost like what DevOps did for Cloud, and what you were showing me in that demo had this whole SRE vibe. And remember Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company, looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI and get value out of AI. Now, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board, companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and are continuing to iterate on, and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, role of data, large scale, how to make that programmable, so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Andy Sheahen, Dell Technologies & Marc Rouanne, DISH Wireless | MWC Barcelona 2023
>> (Narrator) TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Fira Barcelona. It's theCUBE live at MWC23; our third day of coverage of this great, huge event continues. Lisa Martin and Dave Nicholson here. We've got Dell and Dish here; we are going to be talking about what they're doing together. Andy Sheahen joins us as global director of Telecom Cloud Core and Next Gen Ops at Dell. And Marc Rouanne, one of our alumni, is back, EVP and Chief Network Officer at Dish Wireless. Welcome guys. >> Great to be here. >> (Both) Thank you. >> (Lisa) Great to have you. Marc, talk to us about what's going on at Dish Wireless. Give us the update. >> Yeah, so we've built a network from scratch in the US, that covers the US. We use a cloud-based, Cloud native approach, so from the bottom of the tower all the way to the internet it uses cloud, distributed cloud, edge, so there are a lot of things about that. But it's unique, and now it's working, so we're starting to play with it and that's pretty cool. >> What's some of the proof points, proof in the pudding? >> Well, for us, first of all it was to do basic voice and data on a smartphone, and for me the success is that you won't see the difference on a smartphone. That's the baseline. The next step is bringing this to the enterprise for their use case. So we've covered- now we have services for smartphones. We use our brand, the Boost brand, and we are distributing that across the US. But as I said, the real good stuff is when you start making, you know, the machines and all the data and the applications work for the enterprise. >> Andy, how is Dell a facilitator of what Marc just described and the use cases and what they're able to deliver? >> We're providing a number of the servers that are being used out in their radio access network. The virtual DU servers; we're also providing some bare metal orchestration capabilities to help automate the process of deploying all these hundreds and thousands of nodes out in the field. Both of these, the servers and the bare metal orchestration product, are things that we developed in concert with Dish, working together to understand the way, the best way to automate, based on the tooling they're using in other parts of their network, and we've been with you guys since day one, really. >> (Marc) Absolutely, yeah. >> Making each other's solutions better the whole way. >> Marc, why Dell? >> So, the way the networks work is you have a cloud, and you have a distributed edge. You need someone who understands the diversity of the edge in order to bring the cloud software to the edge, and Dell is the best there, you know, you can, we can ask them to mix and match accelerators, processors, memory; it's a very diverse distributed edge. We are building twenty thousand sites, so you imagine the size and the complexity, and Dell was the right partner for that. >> (Andy) Thank you. >> So you mentioned addressing enterprise needs, which is interesting, because there's nothing that would prevent you from going after consumer wireless technically, right, but it sounds like you have taken a look at the market and said "we're going to go after this segment of the market." >> (Marc) Yeah. >> At least for now. Are there significant differences between what an enterprise expects from a 5G network versus a consumer? >> Yeah. 
>> (Dave) They have higher expectations, maybe. Number one I guess is, if my bill is 150 dollars a month I can have certain levels of expectations, whereas a large enterprise that may be making a much more significant investment, are their expectations greater? >> (Marc) Yeah. >> Do you have a higher bar to get over? >> So first, I mean first we use our network for consumers, but for us that's an enterprise. The consumer segment is an enterprise. So we expose the network like we would to a car manufacturer, or to a distributor of goods, of food and beverage. But what you expect when you are an enterprise, you expect managed services. You expect to control the goodness of your services, and for this you need to observe what's happening. Are you delivering the right service? What is the feedback from the enterprise users? And that's what we call the observability. We have a data centric network, so our enterprises are saying "Yeah, connecting is enough, but show us how it works, and show us how we can learn from the data, improve, improve, and become more competitive." That's the big difference. >> So what would you say, Marc, are some of the outcomes you've achieved working with Dell? TCO, ROI, CapEx, OpEx, what are some of the outcomes so far that you've been able to accomplish? >> Yeah, so obviously we don't share our numbers, but we're very competitive. Both on the CapEx and the OpEx. And the second thing is that we are much faster in terms of innovation. You know, one of the things that telcos would not do was to tap into the IT industry. So we have access to the silicon and we have access to the software, and at a scale that none of the telcos could ever do, and for us it's like "wow", it's a very powerful industry, and we've been driving the consist- it's a bit technical, but all the silicon, the accelerators, the processors, the GPUs, the TPUs, and it's like wow. It's really a transformation. >> Andy, is there anything analogous that you've dealt with in the past to the situation where you have this true core edge environment, where you have to instrument the devices that you provide to give that level of observation or observability, whatever the new word is, that we've invented for that? >> Yeah, yeah. >> I mean has there, is there anything- >> Yeah absolutely. >> Is this unprecedented? >> No, no not at all. I mean Dell's been really working at the edge since before the edge was called the edge, right; we've been selling our hardware and infrastructure out to retail shops, branch office locations, you know, just smaller form factors outside of data centers, for a very long time, and so that's sort of the consistency from what we've been doing for 30 years to now. The difference is the volume, the different number of permutations as Marc was saying. The different types of accelerator cards, the different SKUs of different server types, the sheer volume of nodes that you have in a nationwide wireless network. So the volumes are much different, the amount of data is much different, but the process is really the same. It's about having the infrastructure in the right place at the right time and being able to understand if it's working well or if it's not, and it's not just about a red light or a green light but healthy and unhealthy conditions and predicting when the red light's going to come on. And we've been doing that for a while; it's just a different scale, and a different level of complexity when you're trying to piece together all these different components from different vendors. 
So we talk a lot about ecosystem, and sometimes, because of the desire to talk about the outcomes and what the end users, customers, really care about, sometimes we will stop at the layer where, say, a Dell lives, and we'll see that as the sum total of the component, when really, when you talk about a server that Dish is using, that in and of itself is an ecosystem >> Yep, yeah >> (Dave) or there's an ecosystem behind it. You just mentioned it, the kinds of components and the choices that you make when you optimize these devices determine how much value Dish, >> (Andy) Absolutely. >> Can get out of that. How deep are you on that hardware? I'm a knuckle dragging hardware guy. >> Deep, very deep. I mean just the number of permutations that we're working through with Dish and other operators as well, different accelerator cards that we talked about, different techniques for timing, obviously there's different SKUs with the silicon itself, different chipsets, different chips from different providers; all those things have to come together, and we build the basic foundation, and then we also started working with our cloud partners, Red Hat, Wind River, all these guys, VMware, of course, and that's the next layer up. So you've got all the different hardware components, you've got the abstraction layer, with your virtualization layer and/or Kubernetes layer, and all of that stuff together has to be managed, compatibility matrices that get very deep and very big, very quickly, and that's really the foundational challenge, we think, of open RAN: making sure all these different pieces are going to fit together and not just work today but work every day, as everything gets updated much more frequently than in the legacy world. >> So you care about those things, so we don't have to. >> That's right. >> That's the beauty of it. >> Yes. >> Well thank you. (laughter) >> You're welcome. >> I want to understand, you know, some of the things that we've been talking about, every company is a data company, regardless of whether it's telco, it's a retailer, if it's my bank, it's my grocery store, and they have to be able to use data as quickly as possible to make decisions. One of the things they've been talking about here is the monetization of data, the monetization of the network. How do you, how does Dell help a Dish be able to achieve the monetization of their data? >> Well, as Marc was saying before, the enterprise use cases are what we are all kind of betting on for 5G, right? And enterprises expect to have access to data and to telemetry to do whatever use cases they want to execute in their particular industry. So you know, if it's a health care provider, if it's a factory, an agricultural provider that's leveraging this network, they need to get the data from the network, from the devices, they need to correlate it, in order to do things like automatically turn on a watering system at a certain time, right; they need to know the weather around, to make sure it's not too windy and you're not going to waste a lot of water. All that has data; it's going to leverage data from the network, it's going to leverage data from devices, it's going to leverage data from applications, and that's data that can be monetized. When you have all that data and it's all correlated, there's value inherent to it, and you can even go on to a forward looking state where you can intelligently move workloads around, based on the data. 
Based on the clarity of the traffic of the network, where is the right place to put it, and even based on current pricing for things like on-demand instances from cloud providers. So having all that data correlated allows any enterprise to make an intelligent decision about how to move a workload around a network and get the most efficient placing of that workload. >> Marc, Andy mentions things like data and networks and moving data across the networks. You have on your business card, Chief Network Officer; what potentially either keeps you up at night in terror or gets you very excited about the future of your network? What's out there in the frontier and what are those key obstacles that have to be overcome that you work with? >> Yeah, I think we have the network, we have the baseline, but we don't yet have the consumption that is easy for the enterprise. You know, an enterprise likes to say "I have a 4K camera, I connect it to my software." Click, click, right? And that's where we need to be, so we're talking about APIs that are so simple that they become a click, and we engineers, we have a tendency to want to explain, but we should not; it should become a click. You know, the phone revolution with the apps became those clicks; we have to do the same for the enterprise, for video, for surveillance, for analytics, it has to be clicks. >> While balancing flexibility and agility of course, because you know the folks who were fans of CLIs, command line interfaces, who hate GUIs, it's because they feel they have the ability to go down to another level, so obviously that's a balancing act. >> But that's our job. >> Yeah. >> Our job is to hide the complexity, but of course there is complexity. It's like in the cloud, a hyperscaler, they manage complex things, but it's successful if they hide it. >> (Dave) Yeah. >> It's the same. You know, we have to be the hyperscaler of connectivity, but hide it. >> Yeah. >> So that people connect everything, right? >> Well it's Andy's servers, we're all magicians hiding it all. >> Yeah. >> It really is. >> It's like don't worry about it, just know, >> Let us do it. >> Sit down, we will serve you the meal. Don't worry how it's cooked. >> That's right, the enterprises want the outcome. >> (Dave) Yeah. >> They don't want to deal with that bottom layer. But it is tremendously complex and we want to take that on and make it better for the industry. 
Well you know, we've seen that the demand for the healthcare, for the smart cities, has been here for a decade, proof of concepts for a decade, but the consumption has been behind, and for me the whole ecosystem is waking up to: we are going to make it easy, so that the consumption can take off. The demand is there, we have to serve it. And the fact that people are starting to say we hide the complexity, that's our problem, don't even mention it, I love it. >> Yep. Drop the mic. >> (Andy and Marc) Yeah, yeah. >> Andy, last question for you. Some of the things we know, Dell has a big and emerging presence in telco, we've had a chance to see the booth, see the cool things you guys are featuring there, Dave did a great tour of it; talk about some of the things you've heard, and maybe even from customers at this event, that demonstrate to you that Dell is going in the right direction with its telco strategy. >> Yeah, I mean personally for me this has been an unbelievable event for Dell. We've had tons and tons of customer meetings of course, and the feedback we're getting is that the things we're bringing to market, whether it's Infra Blocks or purpose-built servers that are designed for the telecom network, are what our customers need and have always wanted. We get a lot of wows, right? >> (Lisa) That's nice. >> "Wow, we didn't know Dell was doing this, we had no idea." And the other part of it is that not everybody was sure that we were going to move as fast as we have, so the speed with which we've been able to bring some of these things to market, and part of that was working with Dish, you know, a pioneer, to make sure we were building the right things, and I think a lot of the customers that we talked to really appreciate the fact that we're doing it with the industry, >> (Lisa) Yeah. >> You know, not at the industry, and that comes across in the way they are responding and what they're talking to us about now. >> And that came across in the interview that you just did. Thank you both for joining Dave and me. >> Thank you >> Talking about what Dell and Dish are doing together, the proof is in the pudding, and you did a great job at explaining that, thanks guys, we appreciate it. >> Thank you. >> All right, our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin, you're watching theCUBE live from MWC23, day three. We will be back with our next guest, so don't go anywhere. (upbeat music)
Manya Rastogi, Dell Technologies & Abdel Bagegni, Telecom Infra Project | MWC Barcelona 2023
>> TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Spain, everybody. We're here at the theater, live at MWC23. You're watching theCUBE's Continuous Coverage. This is day two. I'm Dave Vellante with my co-host, Dave Nicholson. Lisa Martin is also in the house. John Furrier is out of our Palo Alto studio covering all the news. Check out siliconangle.com. Okay, we're going to dig into the core infrastructure here. We're going to talk a little bit about servers. Manya Rastogi is here. She's in technical marketing at Dell Technologies. And Abdel Bagegni is technical program manager at the Telecom Infra Project. Folks, welcome to theCUBE. Good to see you. >> Thank you. >> Abdel, what is the Telecom Infra Project? Explain to our audience. >> Yeah. So the Telecom Infra Project is a US based non-profit organization, a community that brings together different participants, suppliers, vendors, operators, SIs, together to accelerate the adoption of open RAN and open interface solutions across the globe. >> Okay. So that's the mission, open RAN adoption. And then how, when was it formed? Give us the background and some of the, some of the milestones so far. >> Yeah. So the Telecom Infra Project was established five years ago by different vendor leaders and operators across the globe. And then the mission was to bring different players in to work together to accelerate the adoption of, of open RAN. Now open RAN has a lot of potential and opportunities, but at the same time there's challenges that we work together as a community to address, and barriers to overcome. >> And we've been covering all week just the disaggregation of the network. And you know, we've seen this movie sort of before, playing out now in, in telecom. And Manya, this is obviously a compute intensive environment. We were at the Dell booth earlier this morning poking around, beautiful booth, lots of servers. Tell us what your angle is here in this marketplace. >> Yeah, so I would just like to say that Dell is kind of leading or accelerating the innovation at the telecom edge with all these ruggedized servers that we are offering. So just continuing the mission, like Abdel just mentioned, for the open RAN, that's where a lot of the focus from these servers will be. So XR8000, it's going to be one of the star servers for telecom, you know, offering various workloads. So it can be vRAN, open RAN, multi-access edge compute. And it has all these different features with it, and we can talk more about the performance gains, how it is based on the Intel CPUs, and how it tries to solve this challenge for the open RAN along with various vendors, the whole ecosystem. >> So Manya mentioned some of those infrastructure parts. Does and do, do you say TIP or T-I-P for short? >> (Abdel) We say TIP. >> TIP. >> (Abdel) T-I-P is fine as well. >> Does, does, does TIP or T-I-P have a certification process or a, or a set of guidelines that someone like Dell would either adhere to or follow to be sort of TIP certified? What does that look like? >> Yeah, of course. So what TIP does is TIP accredits solutions that actually work in a real commercial grade environment. So what we do is we bring the different players together to come up with the most efficient, optimized solution. And then it goes through a process that the community sets the, the, the criteria for and accepts. 
And then once this is accredited it goes into TIP Exchange for other operators and participants and the industry to adopt. So it's a well structured process, and it's everything about how we orchestrate the industry to come together and set those requirements and guidelines. Everything starts with a use case from the beginning. It's based on operators' requirements and use cases, and then those use cases will be translated into a solution that the industry will approve. >> So when you say operator, I can think of that sort of traditionally as the customer side of things versus the vendor side of things. Typically when organizations get together like TIP, the operator customer side is seeking a couple of things. They want perfect substitutes in all categories so that they could grind vendors down from a price perspective, but they also want amazing innovation. How do you deliver both? >> Yeah, I mean that's an excellent question. We are pragmatic and we bring all players to one table to discuss. MNOs want this, vendors can provide a certain level, and we bring them together and they discuss and come up with something that can be deployed today and is future proof for the future. >> So I've been an enterprise technology observer for a long time and, you know, I saw the attempt to take network function virtualization, which never really made much of an impact, but it was the beginning of the enterprise players really getting into this market. And then I would see companies, whether it was Dell or HPE or Cisco, they'd take an x86 server, put a cool name on it, edge something, and throw it over the fence, and that didn't work so well. Now it's like, Manya, we're starting to get serious. You're building relationships. >> Manya: Totally. >> I mentioned we were at the Dell booth. You're actually building purpose-built systems now for this segment. Tell us what's different about this market and the products that you're developing for this market than, say, the commercial enterprise. >> So you are absolutely right. Like, you know, kind of thinking about the journey, it has been going on for a long time, all these improvements towards going more open and disaggregated, and overall that kind of environment is what Dell brings together with our various partners, and particularly if you talk about Intel. So these servers are powered by the latest 4th Gen Intel Xeon processors. And what Intel is doing right now is providing us with great accelerators like vRAN Boost. So it increases performance, like doubles what it was able to do before. And power efficiency, it has been an issue for a long, long time and it still continues, but there is some improvement. For example, a 20% reduction overall with the power savings. So that's a step forward in that direction. And then we have done some of our own testing as well with these servers, and continuing that, you know, it's not just telecom but also going towards edge or inferencing, all of these come together, not just the XR8000 but for example the XR5610 and the XR7620. So these are three servers which combine to cover telecom and edge altogether. So that's what it is. >> Great, thank you. So Abdel, I mean I think generally people agree that in the fullness of time all radio access networks are going to be open, right? It's just a matter of, okay, how do we get there? How do we make sure that it has the same, you know, quality of service characteristics.
So where are we on that journey from your perspective? And maybe you could project what it's going to look like over this decade, 'cause it's going to take, you know, years. >> It's going to take a bit of time to mature and to become a kind of plug and play of different units together. I think there was a bit of over-promising in the last few years on the acceleration of open RAN deployment. What TIP is trying to do is realize a pragmatic approach to open RAN deployment. Now we know the innovation cannot happen when you have closed interfaces. When you allow small players to be in the market and bring value to the RAN, this is where the innovation happens. I think what will happen on the RAN side of things is that it will be driven by use cases and the operators. And the minute that the operators can no longer depend on the closed-interface vendors, because there are use cases that require some open RAN functionality, be it the RIC or the SMO layers, or the different configurations of the RUs and getting the servers to the DU side of things, that kind of modular scalability at this layer is when open RAN will take off. This would happen probably, yeah- >> Go ahead. >> Yeah, it would happen in the next few years. Not next year or the year after, but definitely within four to five years from now. >> I think it does feel like it's the second half of the decade, and you feel like the RAN intelligent controller is going to be a catalyst to actually sort of force the world into this open environment. >> Let's say that the RIC, and the promises that were made around SON 10 years ago, the RIC is realizing them, and the closed RAN vendors are developing a lot on the RIC side, more than the other parts of open RAN. So it will be a catalyst that drives the innovation of open RAN, but only time will tell. >> And there are some naysayers, I mean I've seen some, you know, very, very few, but I've seen some argue that, oh, the economics aren't there. It'll never get there. What do you say to that? That open RAN won't ever be as cost effective as, you know, closed networks. >> Open RAN will open up innovation, where small players have the opportunity to contribute to the RAN space. This opportunity is not given to small players today. Open RAN provides this kind of opportunity, and given that it's a path for innovation, I would say that, you know, there are different perspectives. Some people want to make sure the status quo is the way forward, but that would certainly put barriers on innovation, and this is not the way forward. >> Yeah. You can't protect the past from the future. My own personal opinion is that it doesn't have to be comparable from a TCO perspective, it can be close enough. It's the innovation, same thing with, like, you watch the adoption of cloud. >> Exactly. >> Like, cloud was more expensive, it's always more expensive to rent, but people seem to be doing public cloud, you know, because of the innovation capabilities and the developer capabilities. Is that a fair analogy in this space, do you think? >> I mean, this is what happens with all technologies. >> Yeah. >> Right? It starts out quite costly and then the cost will start dropping down.
I mean, the cost of a megabyte two decades ago is probably higher than what a terabyte costs today. So this is how technology evolves, and any kind of comparison, either copper or even the old legacy generations, could be a valid comparison. However, there needs to be a market demand for something like that. And I think the use cases today, with what the industry is looking for, have that kind of opportunity to pull this kind of demand. But again, it needs to work closely with what happens in the technology space. You know, when we used to talk about 5G, there was a lot of hype going on there. But I think once it's realized in a pragmatic, real-life situation, the minute that governments decide to go for autonomous vehicles, then you would have limitations on the current closed RAN infrastructures and you would definitely need something to top it up on the- >> I mean, 5G needs open RAN, I mean, that's, you know, not going to happen without it. >> Exactly. >> Yeah, yeah. But what would you say the most significant friction is between here and the open RAN nirvana? What are the real hurdles that need to be overcome? There's obviously just the, I don't want to change, we've been doing this the same way forever, but what are the real, legitimate concerns that people have when we start talking about open RAN? >> So I think from a technology perspective it will be solved. All of the tech, I mean, there are smart engineers in the world today that will fix, you know, these kinds of problems and all of the interoperability issues and all of that. I think it's about the mindset. The interfaces between the legacy core and RAN have become more fluid today. We don't have that kind of hard line between these different aspects. We have the MEC coming closer to the RAN, we have the RAN coming closer to the core, and we have the service based architectures in the core. So these kinds of things mean it needs a paradigm shift in how operators tackle the open RAN space. >> Are there specific deployment requirements for open RAN that you can speak to from your perspective? >> For sure, and going in this direction, like, you know, the evolution of the technology and how different players are coming together, that's something I wanted to comment on from the previous question. And that's where, like, you know, these servers that Dell is offering right now come in. Specific functionality requirements, for example: it's a small server, it's short depth, just 430 millimeters of depth, and it can fit anywhere. So things like small form factor, it's crucial, because it can replace, like, multiple servers from 10 years ago with just one server, and you can place it near a baseband unit or at a cell site, on top of a roof, wherever. Like, you know, if it's a small company and you need this kind of 5G connection, it kind of solves that challenge with this server. And then there are various things like, you know, increasing thermals, for example temperatures. It is classified, you know, kind of compliant with negative 5 to 55 degrees Celsius. And then we are also moving towards, for example, negative 20 to 65 degrees Celsius. Which is kind of great, because in situations which are out of our hands and you need specific thermals for those situations, that's where it can solve that problem.
>> Are those statistics and those measurements different than the old NEBS standards, the network equipment building standards? Or are they in line with that? >> It is the next step. So most of our servers that we have right now are negative 5 to 55 degrees Celsius, especially the extremely rugged server series, and this one, the XR8000, which is telecom inspired, is focused on those customers. So we are trying to go a step ahead and also offer this additional temperature testing and, yeah, compliance. So it is. >> Awesome. So, as I said, we were at the booth early today. Looks like some good traffic, people poking around at different, you know, innovations you've got going. Some of the private network stuff is kind of cool. I'm like, how much does that cost? I think I might like one of those, you know, but- >> A private 5G home network. >> Right? Why not? Guys, great to have you on the show. Thanks so much for sharing. Appreciate it. >> Thank you. >> Thank you so much. >> Okay. For Dave Nicholson and Lisa Martin, this is Dave Vellante, theCUBE's coverage, MWC 23, live from the Fira in Barcelona. We'll be right back. (outro music)
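The operating-temperature ranges quoted in this segment are easy to sanity-check when planning a deployment site. The following is a minimal illustrative sketch, not a Dell tool; the two ratings it encodes (-5 to 55°C for the typical rugged series, -20 to 65°C as the extended target mentioned for the XR8000) are assumptions taken from the conversation, and the site figures are made up.

```python
# Illustrative sketch only: check whether a site's expected ambient temperature
# range fits inside a server's rated operating window. The ratings below are
# assumptions drawn from the interview, not official Dell specifications.

from dataclasses import dataclass

@dataclass
class ThermalRating:
    model: str
    min_c: float  # rated minimum operating temperature, Celsius
    max_c: float  # rated maximum operating temperature, Celsius

RATINGS = [
    ThermalRating("rugged series (typical)", -5, 55),    # assumed from interview
    ThermalRating("XR8000 (extended target)", -20, 65),  # assumed from interview
]

def fits(site_min_c: float, site_max_c: float, rating: ThermalRating) -> bool:
    """True if the site's expected ambient range stays inside the rated window."""
    return rating.min_c <= site_min_c and site_max_c <= rating.max_c

if __name__ == "__main__":
    # Example: a rooftop cell site that sees -12C winters and 48C summer peaks.
    for r in RATINGS:
        print(r.model, "ok" if fits(-12, 48, r) else "out of range")
```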
SiliconANGLE News | Intel Accelerates 5G Network Virtualization
(energetic music) >> Welcome to the SiliconANGLE News update. Mobile World Congress, theCUBE coverage live on the floor for four days. I'm John Furrier, in the studio here. Dave Vellante, Lisa Martin onsite. Intel in the news: Intel accelerates 5G network virtualization with radio access network boost for Xeon processors. Intel, well known for power and computing, today announced it has integrated the virtual radio access network into its latest fourth gen Intel Xeon system on a chip. This move will help network operators gear up their efforts to deliver cloud native features for next generation 5G core and edge networks. This announcement came today at MWC, formerly known as Mobile World Congress, in Barcelona. Intel is taking the latest step in its mission to virtualize the world's networks, including core, open RAN and edge. Network virtualization is the key capability for communication service providers as they migrate from fixed function hardware to programmable, software defined platforms. This provides greater agility and greater cost efficiency. According to Intel, the demand for agile, high performance, scalable networks requires the adoption of fully virtualized, software based platforms that run on general purpose processors. Intel believes that network operators need to accelerate network virtualization to get the most out of these new architectures, and that's where it can make its mark. With Intel vRAN Boost, it delivers twice the capability and capacity gains over its previous generation of silicon within the same power envelope, with 20% in power savings that result from the integrated acceleration. In addition, Intel announced new Infrastructure Power Manager for 5G core reference software that's designed to work with vRAN Boost. Intel also showcased its new Intel converged edge media platform, designed to deliver multiple video services from a shared multi-tenant architecture. The platform leverages cloud native scalability to respond to shifting demands. Lastly, Intel announced a range of Agilex 7 field programmable gate arrays and eASIC N5X structured application-specific integrated circuits designed for individual cloud, communications, and embedded applications. Intel is targeting power consumption, which is energy, and more horsepower for chips, which is going to power the industrial internet edge. That's going to be cloud native. Big news happening at Mobile World Congress. theCUBE is there. Go to siliconangle.com for all the news and the special report, and the live feed on theCUBE.net. (energetic music)
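Taken together, the quoted claims imply a rough performance-per-watt multiple. The arithmetic below is illustrative only; it simply combines the figures from the announcement (roughly 2x capacity at the same power envelope, with about 20% power savings attributed to the integrated acceleration) and should not be read as a measured benchmark.

```python
# Illustrative arithmetic only: combine the quoted claims (about 2x capacity at
# the same power envelope, ~20% lower power from integrated acceleration) into
# an approximate performance-per-watt multiple versus the prior generation.

def perf_per_watt_gain(capacity_multiple: float, power_multiple: float) -> float:
    """Relative perf/W vs. baseline, given throughput and power multiples."""
    return capacity_multiple / power_multiple

baseline = perf_per_watt_gain(1.0, 1.0)
claimed = perf_per_watt_gain(2.0, 0.8)  # 2x capacity, 20% less power (assumed)

print(f"Claimed perf/W vs. previous generation: {claimed / baseline:.1f}x")  # ~2.5x
```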
Jen Huffstetler, Intel | HPE Discover 2022
>> Announcer: theCube presents HPE Discover 2022 brought to you by HPE. >> Hello and welcome back to theCube's continuous coverage HPE Discover 2022 and from Las Vegas the formerly Sands Convention Center now Venetian, John Furrier and Dave Vellante here were excited to welcome in Jen Huffstetler. Who's the Chief product Sustainability Officer at Intel Jen, welcome to theCube thanks for coming on. >> Thank you very much for having me. >> You're really welcome. So you dial back I don't know, the last decade and nobody really cared about it but some people gave it lip service but corporations generally weren't as in tune, what's changed? Why has it become so top of mind? >> I think in the last year we've noticed as we all were working from home that we had a greater appreciation for the balance in our lives and the impact that climate change was having on the world. So I think across the globe there's regulations industry and even personally, everyone is really starting to think about this a little more and corporations specifically are trying to figure out how are they going to continue to do business in these new regulated environments. >> And IT leaders generally weren't in tune cause they weren't paying the power bill for years it was the facilities people, but then they started to come together. How should leaders in technology, business tech leaders, IT leaders, CIOs, how should they be thinking about their sustainability goals? >> Yeah, I think for IT leaders specifically they really want to be looking at the footprint of their overall infrastructure. So whether that is their on-prem data center, their cloud instances, what can they do to maximize the resources and lower the footprint that they contribute to their company's overall footprint. So IT really has a critical role to play I think because as you'll find in IT, the carbon footprint of the data center of those products in use is actually it's fairly significant. So having a focus there will be key. >> You know compute has always been one of those things where, you know Intel's been makes chips so that, you know heat is important in compute. What is Intel's current goals? Give us an update on where you guys are at. What's the ideal goal in the long term? Where are you now? You guys always had a focus on this for a long, long time. Where are we now? Cause I won't say the goalpost of changed, they're changing the definitions of what this means. What's the current state of Intel's carbon footprint and overall goals? >> Yeah, no thanks for asking. As you mentioned, we've been invested in lowering our environmental footprint for decades in fact, without action otherwise, you know we've already lowered our carbon footprint by 75%. So we're really in that last mile. And that is why when we recently announced a very ambitious goal Net-Zero 2040 for our scope one and two for manufacturing operations, this is really an industry leading goal. And partly because the technology doesn't even exist, right? For the chemistries and for making the silicon into the sand into, you know, computer chips yet. And so by taking this bold goal, we're going to be able to lead the industry, partner with academia, partner with consortia, and that drive is going to have ripple effects across the industry and all of the components in semiconductors. >> Is there a changing definition of Net-Zero? 
What does that mean? Cause some people say they're Net-Zero, and maybe in one area they might be, but maybe not holistically across the company. As it becomes more of a broader mandate, society, employees, partners, Wall Street are all putting pressure on companies. Has the Net-Zero conversation changed a little bit, or what's your view on that? >> I think we definitely see it changing with changing regulations, like those coming forth from the SEC here in the US and in Europe. Net-Zero can't just be lip service anymore, right? It really has to be real reductions in your footprint. And we see that even including in our supply chain goals, where we've taken on new goals to reduce, but our operations are growing. So I think everybody is going through this realization that, you know, with the growth, how do we keep it lower than it would've been otherwise, keep focusing on those reductions, and have not just renewable credits that could have been bought in one location and applied to a different geographical location, but real, credible offsets for where the product is manufactured or the compute is deployed. >> Jen, when you talk about, you've reduced already by 75%, you're on that last mile. We listened to Pat Gelsinger very closely, up until recently he was the number one most frequent guest on theCube. He's been busy, I guess. But as you apply that discipline to where you've been, your existing business, and now Pat's laid out this plan to increase the Foundry business, how does that affect your... Are you able to carry through that reduction to, you know, the new foundries? Do you have to rethink that? How does that play in? >> Certainly, well, the Foundry expansion of our business with IDM 2.0 is going to include the existing factories that already have the benefit of those decades of investment and focus. And then, you know, we have clear goals for our new factories in Ohio and in Europe to achieve goals as well. That's part of the overall plan for Net-Zero 2040. It's inclusive of our expansion into Foundry, which means that many, many more customers are going to be able to benefit from the leadership that Intel has here. And then as we onboard acquisitions, as any company does, we need to look at the footprint of the acquisition and see what we can do to align it with our overall goals. >> Yeah, so sustainable IT, I don't know, for some reason was always an area of interest to me. And when we first started, even before I met you, John, we worked with PG&E to help companies get rebates for installing technologies that would reduce their carbon footprint. >> Jen: Very forward thinking. >> And it was a hard thing to get, you know, but compute was the big deal. And there were technologies, and I remember virtualization at the time was one, and we would go in and explain to the PG&E engineers how that all worked, cause they had metrics that they wanted to see. But anyway, so virtualization was clearly one factor. What are the technologies today that people should be paying attention to? Flash storage was another one. >> John: AI's going to have a big impact. >> Reduce the spinning disk, but what are the ones today that are going to have an impact? >> Yeah, no, that's a great question. We like to think of the built-in acceleration that we have, including some of the early acceleration for virtualization technologies, as foundational. So built-in accelerated compute is green compute, and it allows you to maximize the utilization of the transistors that you already have deployed in your data center.
This compute is sitting there and it is ready to be used. What matters most is what you were talking about, John, that real-world workload performance. And it's not just, you know, a lot of specsmanship around synthetic benchmarks, but AI performance. With the built-in acceleration that we have in Xeon processors, with Intel DL Boost, we're able to achieve 4x the AI performance per watt that we wouldn't get otherwise. You think about the consolidation you were talking about that happened with virtualization. You're basically, effectively doing the same thing with these built-in accelerators that we have continued to add over time, and we have even more coming in our Sapphire Rapids generation. >> And you call that green compute? Or what does that mean, green compute? >> Well, you are greening your compute. >> John: Okay, got it. >> By increasing utilization of your resources. If you're able to deploy AI, utilize the telemetry within the CPU that already exists. We have customers, KDDI in Japan has a great proof point that they already announced, on their 5G data center they lowered their data center power by 20%. That is real bottom line impact, as well as carbon footprint impact, by utilizing all of those built-in capabilities. So, yeah. >> We've heard some stories earlier in the event here at Discover where there were some cooling innovations that were moving the heat to power towns and cities. So you start to see, and you guys have been following this data center and been part of the whole, okay, hot climates, cold climates, but there are new ways to recycle energy. Where's that at? Cause that sounds very sci-fi to me, that, oh yeah, the whole town runs on the data center exhaust. So there's now systems thinking around compute. What's your reaction to that? What's the current view on re-engineering a system to take advantage of that energy or recycling? >> I think when we look at our vision of sustainable compute over this horizon, it's going to be required, right? We know that compute helps to solve society's challenges, and the demand for it is not going away. So how do we take new innovations, looking at a systems level, as compute gets further deployed at the edge, how do we make it efficient? How do we ensure that that compute can be deployed where there is air pollution, right? So some of these technologies that you have, they not only enable reuse, but they also enable some, you know, closing in of the solution to make it more robust for edge deployments. It'll allow you to place your data center wherever you need it. It no longer needs to reside in one place. And then that's going to allow you to have those energy reuse benefits, either into district heating if you're in, you know, Northern Europe, or there are examples with folks putting greenhouses right next to a data center to start growing food in what were previously food deserts. So I don't think it's science fiction. It is how we need to rethink as a society, to utilize everything we have, the tools at our hand.
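The telemetry point above can be made concrete. On most Intel-based Linux hosts the package energy counter is exposed through the RAPL powercap interface; the sketch below reads it to estimate average package power over a short window. It is a minimal illustration, not an Intel or KDDI tool, and the sysfs path and permissions vary by system.

```python
# Minimal sketch: read the CPU package energy counter exposed by Linux RAPL
# (powercap) and report average package power over a short window. Paths and
# permissions vary by system; this assumes an Intel host with powercap enabled.

import time
from pathlib import Path

ENERGY_FILE = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package 0

def read_energy_uj() -> int:
    return int(ENERGY_FILE.read_text().strip())

def average_power_watts(window_s: float = 5.0) -> float:
    start = read_energy_uj()
    time.sleep(window_s)
    end = read_energy_uj()
    # The counter wraps occasionally; this sketch ignores that edge case.
    return (end - start) / 1e6 / window_s

if __name__ == "__main__":
    print(f"Average package power: {average_power_watts():.1f} W")
```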
>> I think if I was going to pick the one most impactful thing that they could do in their infrastructure is it's back to John's comment. It's imagine if the world deployed AI, all the benefits not only in business outcomes, you know the revenue, lowering the TCO, but also lowering the footprint. So I think that's the one thing they could do. If I could throw in a baby second, it would be really consider how you get renewable energy into your computing ecosystem. And then you know, at Intel, when we're 80% renewable power, our processors are inherently low carbon because of all the work that we've done others have less than 10% renewable energy. So you want to look for products that have low carbon by design, any Intel based system and where you can get renewables from your grid to ask for it, run your workload there. And even the next step to get to sustainable computing it's going to take everyone, including every enterprise to think differently and really you know, consider what would it look like to bring renewables onto my site? If I don't have access through my local utility and many customers are really starting to evaluate that. >> Well Jen its great to have you on theCube. Great insight into the current state of the art of sustainability and carbon footprint. My final question for you is more about the talent out there. The younger generation coming in I'll say the pressure, people want to work for a company that's mission driven we know that, the Wall Street impact is going to be financial business model and then save the planet kind of pressure. So there's a lot of talent coming in. Is there awareness at the university level? Is there a course where can, do people get degrees in sustainability? There's a lot of people who want to come into this field what are some of the talent backgrounds of people learning or who might want to be in this field? What would you recommend? How would you describe how to onboard into the career if they want to contribute? What are some of those factors? Cause it's not new, new, but it's going to be globally aware. >> Yeah well there certainly are degrees with focuses on sustainability maybe to look at holistically at the enterprise, but where I think the globe is really going to benefit, we didn't really talk about the software inefficiency. And as we delivered more and more compute over the last few decades, basically the programming languages got more inefficient. So there's at least 35% inefficiency in the software. So being a software engineer, even if you're not an AI engineer. So AI would probably be the highest impact being a software engineer to focus on building new applications that are going to be efficient applications that they're well utilizing the transistor that they're not leaving zombie you know, services running that aren't being utilized. So I actually think-- >> So we got a program in assembly? (all laughing) >> (indistinct), would get really offended. >> Get machine language. I have to throw that in sorry. >> Maybe not that bad. (all laughing) >> That's funny, just a joke. But the question is what's my career path. What's a hot career in this area? Sustainability, AI totally see that. Anything else, any other career opportunities you see or hot jobs or hot areas to work on? 
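To put the renewable-energy point in rough numbers, a simple market-based Scope 2 estimate looks like the sketch below. Every input here is a placeholder assumption (consumption, renewable share, grid emission factor), so it illustrates the mechanics of the calculation rather than any company's actual footprint.

```python
# Back-of-the-envelope sketch: estimate market-based Scope 2 emissions for a
# data center given annual electricity use, the share covered by renewables,
# and an assumed grid emission factor. All inputs are placeholder assumptions.

def scope2_tonnes_co2e(annual_mwh: float,
                       renewable_share: float,
                       grid_kg_co2e_per_mwh: float) -> float:
    """Emissions (tCO2e) attributed to the non-renewable portion of consumption."""
    non_renewable_mwh = annual_mwh * (1.0 - renewable_share)
    return non_renewable_mwh * grid_kg_co2e_per_mwh / 1000.0

# Example: 50 GWh/year, 80% renewable coverage, 400 kgCO2e/MWh grid factor.
print(f"{scope2_tonnes_co2e(50_000, 0.80, 400):,.0f} tCO2e")
```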
>> Yeah, I mean, just really, I think it takes every architect, every engineer to think differently about their design, whether it's the design of a building or the design of a processor or a motherboard we have a whole low carbon architecture, you know, set of actions that are we're underway that will take to the ecosystem. So it could really span from any engineering discipline I think. But it's a mindset with which you approach that customer problem. >> John: That system thinking, yeah. >> Yeah sustainability designed in. Jen thanks so much for coming back in theCube, coming on theCube. It's great to have you. >> Thank you. >> All right. Dave Vellante for John Furrier, we're sustaining theCube. We're winding down day three, HPE Discover 2022. We'll be right back. (upbeat music)
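Jen's comments in this segment about software inefficiency and idle "zombie" services are something a team can measure directly. The sketch below is a generic illustration, not an Intel tool: it samples per-process CPU with the third-party psutil package and flags long-lived processes that stay essentially idle, which are candidates for consolidation or shutdown. The thresholds are arbitrary examples.

```python
# Illustrative sketch: flag long-running, near-idle processes ("zombie services"
# in the loose sense used above) as candidates for consolidation or shutdown.
# Requires the third-party psutil package; thresholds are arbitrary examples.

import time
import psutil

IDLE_CPU_PCT = 0.5      # consider "idle" if below this average CPU %
MIN_AGE_SECONDS = 3600  # only look at processes older than an hour

def find_idle_services(sample_seconds: int = 10):
    procs = list(psutil.process_iter(["pid", "name", "create_time"]))
    for p in procs:
        p.cpu_percent(None)          # prime the per-process counter
    time.sleep(sample_seconds)
    now = time.time()
    idle = []
    for p in procs:
        try:
            if (now - p.info["create_time"] > MIN_AGE_SECONDS
                    and p.cpu_percent(None) < IDLE_CPU_PCT):
                idle.append((p.info["pid"], p.info["name"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return idle

if __name__ == "__main__":
    for pid, name in find_idle_services():
        print(f"near-idle: pid={pid} name={name}")
```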
Changing the Game for Cloud Networking | Pluribus Networks
>>Everyone wants a cloud operating model. Since the introduction of the modern cloud. Last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business. It's so competitive. Then if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. And one of the best examples is Amazon's nitro. That's AWS has custom built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor, optionality for customers. It's a secret weapon for Amazon. As, as we, as we wrote last year, every infrastructure company needs something like nitro to compete. Why do we say this? Well, Wiki Bon our research arm estimates that nearly 30% of CPU cores in the data center are wasted. >>They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a nitro like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't put everything into the public cloud for many reasons. That's one of the tailwinds for tier two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers, and they don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers, they want to be more like hyperscalers, right? They want to be more agile and they do that. They're distributing, networking and security functions and pushing them closer to the applications. >>Now, at the same time, they're unifying their view of the network. So it can be less fragmented, manage more efficiently with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to changing the game for cloud networking made possible by pluribus networks. My name is Dave Vellante and today on this special cube presentation, John furrier, and I are going to explore these issues in detail. We'll dig into new solutions being created by pluribus and Nvidia to specifically address offloading, wasted resources, accelerating performance, isolating data, and making networks more secure all while unifying the network experience. We're going to start on the west coast and our Palo Alto studios, where John will talk to Mike of pluribus and AMI, but Donnie of Nvidia, then we'll bring on Alessandra Bobby airy of pluribus and Pete Lummus from Nvidia to take a deeper dive into the technology. And then we're gonna bring it back here to our east coast studio and get the independent analyst perspective from Bob Liberte of the enterprise strategy group. We hope you enjoy the program. Okay, let's do this over to John >>Okay. Let's kick things off. We're here at my cafe. One of the TMO and pluribus networks and NAMI by Dani VP of networking, marketing, and developer ecosystem at Nvidia. Great to have you welcome folks. >>Thank you. Thanks. >>So let's get into the, the problem situation with cloud unified network. What problems are out there? 
What challenges do cloud operators have Mike let's get into it. >>Yeah, it really, you know, the challenges we're looking at are for non hyperscalers that's enterprises, governments, um, tier two service providers, cloud service providers, and the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies. And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Um, really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. Um, we're seeing a growth in cyber attacks. Um, it's, it's not slowing down. It's only getting worse and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the pluribus vision. How does this tie together? >>Yeah. So, um, basically what we see is, uh, that this demands a new architecture and that new architecture has four tenants. The first tenant is unified and simplified cloud networks. If you look at cloud networks today, there's, there's sort of like discreet bespoke cloud networks, you know, per hypervisor, per private cloud edge cloud public cloud. Each of the public clouds have different networks that needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command and not have to go to each one. The second is like I mentioned, distributed security, um, distributed security without compromise, extended out to the host is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. >>You know, it's, it's, it's sort of like with security, you really can't see you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction abstract, the complexity of all of these discreet networks, physic whatever's down there in the physical layer. Yeah. I don't want to see it. I want to abstract it. I wanted to find things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenant is SDN automation. >>Mike, we've been talking on the cube a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen, how do we get there? How do customers get this vision realized? >>That's a great question. And I appreciate the tee up. I mean, we're, we're here today for that reason. We're introducing two things today. Um, the first is a unified cloud networking vision, and that is a vision of where pluribus is headed with our partners like Nvidia longterm. Um, and that is about, uh, deploying a common operating model, SDN enabled SDN, automated hardware, accelerated across all clouds. 
Um, and whether that's underlying overlay switch or server, um, hype, any hypervisor infrastructure containers, any workload doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. Um, the first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. Um, and what's nice about this is we're not starting from scratch. We have a, a, an award-winning adaptive cloud fabric product that is deployed globally. Um, and in particular, uh, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4g and 5g virtualized cores. We know how to build carrier grade, uh, networking infrastructure, what we're doing now, um, to realize this next generation unified cloud fabric is we're extending from the switch to this Nvidia Bluefield to DPU. We know there's a, >>Hold that up real quick. That's a good, that's a good prop. That's the blue field and video. >>It's the Nvidia Bluefield two DPU data processing unit. And, um, uh, you know, what we're doing, uh, fundamentally is extending our SDN automated fabric, the unified cloud fabric out to the host, but it does take processing power. So we knew that we didn't want to do, we didn't want to implement that running on the CPU, which is what some other companies do because it consumes revenue generating CPU's from the application. So a DPU is a perfect way to implement this. And we knew that Nvidia was the leader with this blue field too. And so that is the first that's, that's the first step in the getting into realizing this vision. >>I mean, Nvidia has always been powering some great workloads of GPU. Now you've got DPU networking and then video is here. What is the relationship with clothes? How did that come together? Tell us the story. >>Yeah. So, you know, we've been working with pluribus for quite some time. I think the last several months was really when it came to fruition and, uh, what pluribus is trying to build and what Nvidia has. So we have, you know, this concept of a Bluefield data processing unit, which if you think about it, conceptually does really three things, offload, accelerate an isolate. So offload your workloads from your CPU to your data processing unit infrastructure workloads that is, uh, accelerate. So there's a bunch of acceleration engines. So you can run infrastructure workloads much faster than you would otherwise, and then isolation. So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, you know, a couple of years ago, and with pluribus, you know, we've been talking to the pluribus team for quite some months now. >>And I think really the combination of what pluribus is trying to build and what they've developed around this unified cloud fabric, uh, is fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch, all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. 
And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have, um, isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>You know, what I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the customer? >>Yeah. So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield. Um, so that, you know, if you sort of think about what, um, what Bluefields, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to, uh, last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration, four X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, uh, chips to our portfolio every, every 18 months to two years. Um, so that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the, you know, faster, more efficient, secure AI systems from the core of your data center, all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. 
And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >>You know, what's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment Supercloud or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. Right. And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers run this effectively? Cost-effectively and how do people migrate? >>Yeah, I, I think that is the key question, right? So we've got this beautiful architecture. You, you know, Amazon nitro is a, is a good example of, of a smart NIC architecture that has been successfully deployed, but enterprises and serve tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost-effective. And, and that's, that's super key. I mean, the reality is deep user moving fast, but they're not going to be, um, deployed everywhere on day one. Some servers will have DPS right away, some servers will have use and a year or two. And then there are devices that may never have DPS, right. IOT gateways, or legacy servers, even mainframes. Um, so that's the beauty of a solution that creates a fabric across both the switch and the DPU, right. >>Um, and by leveraging the Nvidia Bluefield DPU, what we really like about it is it's open. Um, and that drives, uh, cost efficiencies. And then, um, uh, you know, with this, with this, our architectural approach effectively, you get a unified solution across switch and DPU workload independent doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can, uh, create tremendous cost efficiencies and, and really extract a lot of the expense from, from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy our security policy and is deployed everywhere, automatically saving the oppor, the network operations team and the security operations team time. >>All right. So let me rewind that because that's super important. Get the unified cloud architecture, I'm the customer guy, but it's implemented, what's the value again, take, take me through the value to me. I have a unified environment. What's the value. >>Yeah. So I mean, the value is effectively, um, that, so there's a few pieces of value. The first piece of value is, um, I'm creating this clean D mark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the dev ops team who owned the server and the NetApps team who own the network because they're installing software on the, on the CPU stealing cycles from what should be revenue generating. Uh CPU's. So now by, by terminating the networking on the DPU, we click create this real clean DMARC. So the dev ops folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetApps team, because they want to control the networking. 
>>And now we've got this clean DMARC where the DevOps folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. Um, the next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor, penetrates a perimeter firewall and can go wherever they want and wreak havoc. Right? And so that's why this, this is so essential. Um, and the next benefit obviously is this unified networking operating model, right? Having, uh, uh, uh, an operating model across switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single villain, for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge. Um, new control points. I think you guys have a new architecture that enables the security to be handled more flexible. >>Right. >>That seems to be the killer feature here, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with, uh, allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I gotta ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. Um, I mean, we're, you know, we're super excited about the partnership, obviously we're here together. Um, we think we've got a really good solution for the market, so we're jointly marketing it. Um, uh, you know, obviously we appreciate that Nvidia is open. Um, that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the, in the future, but right now, um, we're, we feel like we're partnered with the number one, uh, provider of DPS in the world and, uh, super excited about, uh, making a splash with it. >>I'm in get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing, uh, well, we now also have the converged accelerator. 
So I talked about artificial intelligence or artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads. So if you have an artificial intelligence workload and an infrastructure workload, you can warn them separately on the same platform or you can actually use, uh, you can actually run artificial intelligence applications on the Bluefield itself. So that's what the converged accelerator really brings to the table. Uh, so that's available now. Then we have Bluefield three, which will be available late this year. And I talked about sort of, you know, uh, how much better that next generation of Bluefield is in comparison to Bluefield two. So we will see Bluefield three shipping later on this year, and then our software stack, which I talked about, which is called Doka we're on our second version are Doka one dot two. >>We're releasing Doka one dot three, uh, in about two months from now. And so that's really our open ecosystem framework. So allow you to program the Bluefields. So we have all of our acceleration libraries, um, security libraries, that's all packed into this STK called Doka. And it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield, you know, next, next year, we'll have, you know, another version and so on and so forth Doka is really that unified unified layer that allows, um, Bluefield to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's sort of the nice thing around, um, around Doka. And then in terms of our go to market model, we're working with every, every major OEM. So, uh, later on this year, you'll see, you know, major server manufacturers, uh, releasing Bluefield enabled servers. So, um, more to come >>Awesome, save money, make it easier, more capabilities, more workload power. This is the future of, of cloud operations. >>Yeah. And, and, and, uh, one thing I'll add is, um, we are, um, we have a number of customers as you'll hear in the next segment, um, that are already signed up and we'll be working with us for our, uh, early field trial starting late April early may. Um, we are accepting registrations. You can go to www.pluribusnetworks.com/e F T a. If you're interested in signing up for, um, uh, being part of our field trial and providing feedback on the product, >>Awesome innovation and network. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay. In a moment, we'll be back to look deeper in the product, the integration security zero trust use cases. You're watching the cube, the leader in enterprise tech coverage, >>Cloud networking is complex and fragmented slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >>Pluribus unified cloud networking provides a unified simplify and agile network fabric across all clouds. It brings the simplicity of a public cloud operation model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business philosophy and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking, pluribus unified cloud fabric. 
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks and across all workloads and virtualization environments. The Unified Cloud Fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately the Unified Cloud Fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The Unified Cloud Fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, and pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed.
>> To learn more, visit www.pluribusnetworks.com.
>> Okay, we're back. I'm John Furrier with theCube, and we're going to go deeper into the unified cloud networking solution from Pluribus and Nvidia, and we'll examine some of the use cases, with Alessandra Burberry, VP of product management at Pluribus Networks, and Pete Bloomberg, who's director of technical marketing at Nvidia, joining remotely. Guys, thanks for coming on. Appreciate it.
>> Yeah.
>> So, deep dive. Let's get into the what and how. Alessandra, we heard earlier about the Pluribus-Nvidia partnership and the solution you're working on together. What is it?
>> Yeah. First let's talk about the what. What are we really integrating with the Nvidia Bluefield DPU technology? Pluribus has been shipping, in volume, in multiple mission-critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standard open network operating system for the data center. And the novelty about this system is that it integrates a distributed control plane for an automated and effective SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it's not a closed system. And this is actually what we're now porting to the Nvidia DPU.
>> Awesome. So how does it integrate into the Nvidia hardware? Specifically, how is Pluribus integrating its software with the Nvidia hardware?
>> Yeah, I think we leverage some of the interesting properties of the Bluefield DPU hardware, which allow us to integrate our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node, the switch-on-a-NIC effectively, completely independently from the host.
You don't have to go through the network operating system running on x86 to control this network node. So you get, effectively, the experience of a top-of-rack switch for a virtual machine, or a top-of-rack switch for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM's virtual interface to a virtual interface on the switch-on-a-NIC.
And also, as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of the DOCA, the Nvidia DOCA API, to program the accelerators. And this accomplishes two things. Number one, you have much greater performance, much better performance, than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25 percent of the server capacity, to be devoted either to additional workloads, to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20 percent if you want to run the same number of compute workloads. So there are great efficiencies in the overall approach.
>> And this is completely independent of the server CPU, right?
>> Absolutely. There is zero code running on the x86, and this is why we think this enables a very clean demarcation between compute and network.
>> So Pete, I gotta get you in here. We heard that the DPU enables a cleaner separation of dev ops and net ops. Can you explain why that's important? Because everyone's talking DevSecOps right now, and you've got net ops, net sec ops, and this separation. Why is this clean separation important?
>> Yeah, I think it's a pragmatic solution, in my opinion. You know, we wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think with a lot of the dev ops stuff, that mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. And so, you know, again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is.
>> Yeah. That's a great point, because dev ops has become kind of the calling card for cloud, right? But dev ops is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now.
>> Yeah, exactly. And I think that's where, for one part, it's the policy, the security, the zero trust aspect of this, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security is part of that.
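To put rough numbers on the 20 to 25 percent figure Alessandra cites above, here is a back-of-the-envelope sketch in Python. The fleet size, server size, cost, and the exact overhead share are illustrative assumptions for the sketch, not figures from the interview.

```python
import math

def servers_needed(total_app_cores: int, cores_per_server: int, overhead_share: float) -> int:
    """Servers required when `overhead_share` of each server's cores go to
    networking/security services instead of application workloads."""
    usable_per_server = cores_per_server * (1.0 - overhead_share)
    return math.ceil(total_app_cores / usable_per_server)

APP_CORES = 40_000        # assumed total cores the applications actually need
CORES_PER_SERVER = 64     # assumed server size
SERVER_COST = 15_000      # assumed fully loaded cost per server, USD

baseline = servers_needed(APP_CORES, CORES_PER_SERVER, overhead_share=0.22)  # ~20-25% burned on the host
offloaded = servers_needed(APP_CORES, CORES_PER_SERVER, overhead_share=0.0)  # services moved to the DPU

print(f"servers without offload: {baseline}")
print(f"servers with DPU offload: {offloaded}")
print(f"servers avoided: {baseline - offloaded} "
      f"(roughly ${(baseline - offloaded) * SERVER_COST:,} in hardware alone)")
```

Run the same arithmetic the other way and you get Alessandra's second option: keep the same number of workloads and shrink the fleet, and its power footprint, by roughly the offloaded share.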
But the other part is thinking about this at scale, right? So we're taking one top-of-rack switch and adding, you know, up to 48 servers per rack. And so that ability to automate, orchestrate, and manage at scale becomes absolutely critical.
>> Alessandra, this is really the why we're talking about here, and this is scale. And again, getting it right: if you don't get it right, you're going to be really kind of up, you know what. So this is a huge deal. Networking matters, security matters, automation matters, dev ops, net ops, all coming together with clean separation. Help us understand how this joint solution with Nvidia fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now.
>> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one. What are we really unifying? If we're unifying something, that something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers; you build your IP Clos fabric, leaf-and-spine topologies. This is actually a well-understood problem, I would say. There are multiple vendors with, let's say, similar technologies, very well standardized, well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer and deploy segmentation and security closer to the workloads.
And this is where the complications arise. This high-value part of the cloud network is where you have a plethora of options that don't talk to each other, and they are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs for an ESXi environment, a Hyper-V, or a Xen are completely disjointed. You have multiple orchestration layers. And then when you also throw Kubernetes into this type of architecture, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you're actually just stacking up multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workload, whether this fabric spans a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network.
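As an illustration of that "one common set of segmentation services" idea, and only an illustration (this is a toy model, not Pluribus's API), a default-deny, per-application policy can be expressed and evaluated the same way regardless of whether the endpoint is bare metal, a VM, or a pod:

```python
# Toy model of workload-agnostic micro-segmentation: one policy definition,
# one check, applied identically to bare metal, VM, or container endpoints.
# All names and fields here are made up for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    app: str   # application segment the workload belongs to
    kind: str  # "baremetal", "vm", or "pod"; irrelevant to the decision

# Default deny: only flows listed here are allowed (src app, dst app, dst port).
ALLOWED_FLOWS = {
    ("web", "api", 443),
    ("api", "db", 5432),
}

def is_allowed(src: Endpoint, dst: Endpoint, port: int) -> bool:
    """Same decision logic for every workload type: the segment, not the
    hypervisor or orchestrator, drives the policy."""
    return (src.app, dst.app, port) in ALLOWED_FLOWS

print(is_allowed(Endpoint("web", "vm"), Endpoint("api", "pod"), 443))        # True
print(is_allowed(Endpoint("web", "vm"), Endpoint("db", "baremetal"), 5432))  # False: lateral move blocked
```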
That's probably the number one.
>> You know, it's interesting, man, I hear you talking, and I hear one network, many different operating models, and it reminds me of the old serverless days. You know, there are still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on. Is that right? I mean, I'm just joking, serverless and networkless, but the idea is it should be one thing.
>> Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that kind of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, which is typically the way people today segment and secure the traffic in the cloud.
>> Awesome. Pete, all kidding aside about networkless and serverless, kind of a fun play on words there, the network is one thing; it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why the DPU-based approach is better than the alternatives?
>> Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dog, I heard you like servers, so I put a server inside your server." And so we provide, you know, Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. So the actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage, but we're taking all of the functions we used to do at the top-of-rack switch and we're just pushing them down now.
And, you know, as time has gone on, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances.
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that the VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically, you know, financially, afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it.
>> So what's in it for the customer? Real quick, because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices and whatnot, all that good stuff going on there, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment flexibility, relative to security policies and application enablement. I mean, is that what the customer gets out of this architecture? What's the enablement?
>> It comes down to taking, again, the capabilities that were in that top-of-rack switch and pushing them down. So that means simplicity, smaller blast radiuses for failure, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, you know, we always want to kind of separate each one of those layers. So just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together; I can now do this at a different layer. And so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. You know, to me, the possibilities are endless. Yes.
>> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandra, this is huge upside, right? You've already identified some successes with some customers on your early field trials. What are they doing, and why are they attracted to the solution?
>> Yeah, I think the response from customers has been the most encouraging and exciting thing for us as we continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier two and tier three cloud providers. We talked to SPs, software-type telco networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider, in Asia who is actually managing a cloud where they are offering services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp workloads into the cloud based on ESXi and KVM, depending on what the customer picks from the pieces on the menu. And they have the problem of now orchestrating, through their orchestrator, or integrating with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats up into the server CPU.
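To see why per-host virtual appliances "eat up into the server CPU," a quick illustrative calculation; every number below is assumed for the sketch, and none of it comes from the customer example.

```python
# Assumed figures for illustration: a virtual firewall appliance pinned on
# every hypervisor host, versus the same east-west inspection offloaded to a DPU.

HOSTS = 500               # assumed hypervisor hosts
CORES_PER_HOST = 48       # assumed cores per host
APPLIANCE_VCPUS = 8       # assumed vCPUs reserved per virtual firewall

reserved_cores = HOSTS * APPLIANCE_VCPUS
total_cores = HOSTS * CORES_PER_HOST
share = reserved_cores / total_cores

print(f"cores reserved for appliances: {reserved_cores} of {total_cores} "
      f"({share:.0%} of the fleet)")

# Offloading the same inspection to the DPU hands those cores back to
# revenue-bearing workloads and removes 500 appliances to patch and operate.
print(f"host-equivalents returned to workloads: {reserved_cores / CORES_PER_HOST:.0f}")
```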
What they saw in this technology, and they actually call it game changing, is the ability to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. And overall they're hoping to get a tremendous OpEx benefit out of it, and an overall operational simplification of the cloud infrastructure. That's one potent use case. Another large, global enterprise customer is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors.
So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few types of customers on the EFT program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples.
>> That was a great use case, and there's a lot more potential. I see that with the unified cloud networking. Great stuff. Pete, shout-out to you guys at Nvidia; I've been following your success for a long time, and you're continuing to innovate as cloud scales. And Pluribus here with the unified networking, kind of bringing it to the next level. Great stuff. Great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security and more flexibility to their teams. How can people learn more?
>> Yeah, so Alessandra and I have a talk at the upcoming Nvidia GTC conference. So that's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc. You can also watch the recorded sessions if you end up catching us on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details of what we're providing in the solution.
>> Alessandra, how can people learn more?
>> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and Pluribus will contact them to learn more and actually sign up for the early field trial program, which starts at the end of April.
>> Okay. Well, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCube. Thanks for watching.
>> Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure.
Now let's get the perspective from an independent analyst, and to do so, we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios.
>> Oh, thanks for having me. It's great to be here.
>> Yeah. So this idea of a unified cloud networking approach: how serious is it? What's driving it?
>> Yeah, there are certainly a lot of drivers behind it, but probably the first and foremost is the fact that application environments are becoming a lot more distributed, right? The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations. And as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all of that is driving up complexity. In fact, when we asked about network complexity in one of our last surveys, last year, more than half, 54 percent, came out and said, hey, our network environment is now either more or significantly more complex than it used to be.
And as a result of that, it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So it runs a little bit counter to that and, you know, really counter to their overarching digital transformation initiatives. From what we've seen, nine out of ten organizations today are either beginning, in process, or have a mature digital transformation initiative, but their top goals, when you look at them, probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense. I've distributed my environment to create agility, but I've created a lot of complexity, so now I need these tools that are going to help me drive operational efficiency and drive better experience.
>> I mean, I love how you bring in the data; ESG does a great job with that. The question is: is it about just unifying existing networks, or is there sort of a need to rethink, kind of do over, how networks are built?
>> Yeah, that's a really good point, because certainly unifying networks helps, right; driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and given the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices and containers, are driving a lot more east-west traffic. So in the old days it used to be easier: north-south coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other, and users communicating to them. So there's a lot more traffic, and a lot of it's taking place within the servers themselves.
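Bob's "hundreds, if not thousands, of microservices" point is easy to quantify: the number of possible service-to-service paths grows roughly with the square of the service count. A small sketch, with arbitrary service counts chosen only to show the shape of the growth:

```python
# Potential east-west communication paths between n services: n * (n - 1) / 2.
# The service counts are arbitrary; the point is the quadratic growth.

def service_pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} services -> {service_pairs(n):>8,} possible service-to-service paths")
```

Which is why a handful of north-south chokepoints stops being a workable place to see, let alone secure, that traffic.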
The other issue that you're starting to see as well, from that security perspective, is that when we were all consolidated, we had those perimeter-based legacy, you know, castle-and-moat security architectures. But that doesn't work anymore when the applications aren't in the castle, right? When everything's spread out, that no longer holds. So we're absolutely seeing organizations trying to make a shift. And I think, much like the shift we're seeing with all the remote workers and the SASE framework to enable a secure framework there, this is almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers and within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area: ensuring that these applications, when they're connected to each other, are fully segmented out. And that's again because, if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. By doing that, it really makes it a lot harder for them to see everything that's in there.
>> You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy. You know, you build a moat to protect the queen and the castle; well, the queen has left the castle. It's just distributed. So how should we think about this Pluribus and Nvidia solution? There's a spectrum: you've got appliances, you've got pure software solutions, and you've got what Pluribus is doing with Nvidia. Help us understand that.
>> Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part with that is, if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back. So with the need for agility, with the need for performance, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections and more and more appliances. So it can get very costly, as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay, so that's a great approach, right, it brings it really close to the applications, but there are a couple of things you start running into there. One is that the DevOps teams start taking on that networking and security responsibility, which they don't want to do, and the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it.
So, you know, when we think about all those types of things, certainly one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being utilized, and you have hundreds or thousands of servers, those costs are going to add up. So what Nvidia and Pluribus have done by working together is to take some of those services and deploy them onto a SmartNIC, to deploy the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to unify, create that unified fabric across the networking space, extending those networking services all the way down to the server. So the benefits of having that are pretty clear, in that you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside of the server and over to a different rack somewhere else in the data center, so your performance is going to be optimized as well; you're not going to incur a latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus, from an organizational aspect, and we talked about the dev ops and net ops teams, the network operations teams can now work with the security teams to establish the security policies and the networking policies, so that the dev ops teams don't have to worry about that. So essentially they just create the guardrails and let the dev ops teams run, because that's what they want: they want that agility and speed.
>> Yeah. Your point about CPU cycles is key. I mean, it's estimated that 25 to 30 percent of CPU cycles in the data center are wasted; the cores are wasted doing storage offload or networking or security offload. And, you know, I've said many times, everybody needs a Nitro like Amazon's got, but you can only get Amazon Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this?
>> Yeah, that's a great analogy to think about this. And I think I would take it a step further, because it's almost the opposite end of the spectrum, because Pluribus and Nvidia are doing this in a very open way. Pluribus has always been a proponent of open networking, and so what they're trying to do is extend that now to these distributed services. Working with Nvidia, who's also open as well, they're able to bring that to bear so that organizations can take advantage not only of these distributed services, but also of that unified networking fabric, that unified cloud fabric, across the environment, from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments, you know, bare metal, any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus.
>> So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right?
>> Yeah.
Well, think about what it does, again, from that operational efficiency standpoint. When you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration, because you're not switching between different management systems to accomplish it. You've got the same unified networking fabric that you've been working with, enabling you to run your legacy environment as well as transfer over to those modern applications. Okay?
>> So your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here.
>> So yeah, I think obviously, with all the modern applications that are coming out and the distributed application environments, it's really posing a lot of risk for these organizations to be able to get not only security but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server across the switches to multiple different environments, including different cloud environments, is certainly going to help organizations drive that operational efficiency. It's going to help them save money, with visibility, with security, and even open networking. So it's a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution.
>> Bob, thanks so much for coming in and sharing your insights. Appreciate it.
>> You're welcome. Thanks.
>> Thanks for watching the program today. Remember, all these videos are available on demand at theCube.net. You can check out all the news from today at siliconangle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and for sponsoring theCube. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.
Peter Adderton, Mobile X Global, Inc. & Nicolas Girard, OXIO | Cloud City Live 2021
>> Okay, we're back here on theCube, with all the action here at Mobile World Congress, Cloud City. I'm John Furrier, host of theCube. We've got some great remote interviews. Of course, it's a hybrid event here on theCube, and of course Cloud City is bringing all the physical face-to-face, and we're going to get the remote interviews: Peter Adderton, founder, chairman, and CEO of Mobile X Global, and Nicolas Girard, founder and CEO of OXIO. Gentlemen, thank you for coming in remotely onto theCube here in the middle of Cloud City. You missed Bon Jovi last night; he was awesome, a little acoustic, unplugged, and all the action. Thanks for coming on.
>> Yeah, thanks for having us.
>> All right, Peter and Nicolas, if you don't mind, just take a quick 30 seconds to set the table on what you guys do, your business, and your focus here at Mobile World Congress.
>> So I'll jump in quickly. Being the Australian, I'll go first. Just quickly, by way of background, I founded a company called Boost Mobile, which is now the fourth largest mobile brand in America. I spent a lot of time managing and building in that space, and I'm now launching Mobile X, which is kind of the first cloud AI platform that we're going to build for mobile.
>> Awesome. Nicolas?
>> So I'm the founder of a company called OXIO, where what we do is basically a telecom-as-a-service platform for brands to incorporate telecom as part of their services and learn from their customers through what we call telecom business intelligence: basically making sense of the telecom data to improve their business across retail, financial services, or the on-demand economy.
>> Awesome. Well, thanks for the setup. Peter, I want to ask you first, if you don't mind. The business model in the telecom area is really becoming not just operate, but build: building new software-enabled, software-defined, cloud-based services. And this has been a change in mindset, not so much a change in the actual topologies per se, or the actual investments, but a change in personnel. What's your take on this whole cloud powering the change in the future of telco?
>> Well, I think you've got to look at where the telcos have come from in order to understand where they're going in the future. And where they've come from is basically using other people's technology to try to create a differentiation, and I think that's the struggle they're going to have. They talk about wanting to convert themselves from telcos into techcos. I just think it's a leap too far for the carriers to do that. So I think we're going to see, you know, them pushing 5G, which you see they're doing out there right now. Then they start talking about Open RAN and cloud, and at the end of the day, all they want to do is basically sell you a plan, give you a phone attached to that, and try to make as much money out of you as they possibly can. And they disguise that basically in the whole technology 5G Open RAN discussion, but I don't think they really care. And at the end of the day, I don't think the consumers care. Their model isn't built around technology; the model is built around selling your data, and that's their fundamental principle and how they do it. And I've seen them go through 2G, 3G, 4G, 5G. Every G we see come out has a promise of something new and incredible, but what we basically get is a data plan with the minutes, right?
>> Yeah, yeah, you're totally right on.
And I think we're going to get into the whole edge piece of what that's going to open up, when you start thinking about what the capabilities are and these new stakeholders who are going to have an interest in the trillions of dollars on the table right now, up for grabs. But Nicolas, I wanted to get to you on this whole digital-first thing, because one of the things we've been saying on theCube, interviewing folks and riffing, is: if digital drives more value, and there are new use cases coming that are going to be enabled by software, there are now new stakeholders coming in and saying, hey, you know what, I need more than just a pipe. I need more than just the network. I need to actually run healthcare. I need to run education on the edge. These are now industrial and consumer-related use cases. I mean, this is software; this is where software and apps shine. So cloud native can enable that. So what's your take on the industry as they start to wake up and say, holy shit, this is going to be pretty massive when you look at what's coming? Not so much what's going to be replatformed, but what's coming.
>> Yeah, no, I think this is where I kind of join Peter on this. There hasn't been significant innovation on the carrier side; if you think about it, it's been 30 years or so of just reselling plans, effectively, which is a virtual slice of the network they built. And all of a sudden they started competing against, you know, the heavyweights on the internet, who are putting the bar really high in terms of latency, in terms of expectations, in terms of APIs, right? We've heard about telecom APIs for 15 years, right? Nothing comes close to what you could get if you start building on top of a Stripe or a Google. So I think it's going to be hard for a lot of those companies. What we do at OXIO is we try to bridge that gap, right? We try to build on top of their infrastructure to be able to expose modern APIs, to be able to open up a programmatic interface, so that innovators like Peter are able to really take the user experience forward and start building those specialized businesses across healthcare, financial services, and whatnot.
>> Yeah, Dave Vellante and I were on theCube yesterday talking about how Snowflake, a company that basically sits on top of Amazon and built almost none of the infrastructure itself, built on top of it and was successful. Peter, this is a growth thing. One of the things I want to get your thoughts on: you've had experience growing companies. How do you look at the growth coming into this market, Peter? Because, you know, you've got new opportunities coming in. It's a growth play too. It's not just taking share from someone; it's net new capabilities.
>> Yeah. Here's the issue you've got with the wireless industry: there are only a very few of them that actually have that last mile covered. So if you're going to build something on top of it, you're going to have to deal with the carrier, and the carriers act like a duopoly slash monopoly, because without access to their network, you're not going to be able to do these incredible things. So I think we've got a real challenge there, where you're going to have to get the carriers to innovate. Now you've got the CEO of Deutsche Telekom coming out yesterday saying that the OTT players aren't paying their fair share. And I sit back and go, well, hang on.
You're selling data to customers who are basically using that data to use apps and OTT services, and now he's saying, well, they should pay as well. So not only does the consumer pay, but now the OTT players should pay. It's a mixed message. So what you're going to have to do, what we're going to have to do as a growth industry, is allow it to grow. And the only way to do that is for the carriers to provide better access, allow more access to their networks, and, as Nico said, let the APIs become more available. I just think that's a leap too far. So I think we're going to be handicapped in our growth by these carriers, and it's going to take regulators, and it's going to take innovation and consumers demanding that carriers do it. Otherwise, you know, you're still going to be dealing with the three carriers in your world.
>> Yeah, that's interesting. I was just talking to Danielle Royston, "the DR," here at TelcoDR, and I was talking about Open RAN and whether there's more infrastructure than needed. She said, oh, it's more software. I don't disagree with her; I do agree with it. But I also think that Open RAN points to, Nicolas, kind of this idea that there's more surface area to be had on the scale side. So standardizing hardware creates a lower fixed cost, so you can get some cost reduction, and then with standardized software you get more enablement through hardened openness. I mean, open source has already proven you can still be secure, and obviously cloud, which it was once said could never be secure, is probably more secure than anything. What's your take on this whole Open RAN commodity standardization effort?
>> I mean, it goes along with the second phase, right, of what the differentiation in telecom was. Early on, it was specialized boxes that are very expensive, you know, that you get from a few vendors. Then you have the transition over to software; you lower the price, as you were mentioning, and it can run on off-the-shelf hardware. And then we're in the transition, which is what Danielle is evangelizing, right: the transition towards the cloud, and specifically the public cloud, because there's no such thing as a private cloud, really. And so Open RAN is just another piece where you can make the Legos connect better, effectively, and just have more flexibility. And generally, the game here is also to break the agenda of the vendors, right? Because now you have a standard, so you don't necessarily need to buy the entire stack from the same vendors; you have a lot more flexibility. You know, you've probably followed the same debate that we've all seen, right, with the push against Huawei, for instance. It's extremely hard for an operator to start ripping out an entire vendor, because most of the time they own the entire stack. But with something like Open RAN, now you can start mixing and matching different vendors. And generally, this is also a trend that's going to accelerate the move towards the public cloud.
>> That's awesome. Peter, I want to get your thoughts, because you're basically building on the cloud. And if you don't mind chiming in, to kind of end the segment on this one point: people are trying to really get their minds around what refactoring means. And we've been saying and talking about, you know, the three phases of waking up to this world. Reset your business, or reboot.
Replatform to the cloud, and then refactor, which means take advantage of cloud-enabled things, whether it's AI or other things. But first get on the platform, understand the economics, and then replatform. So the question, Peter, and we'll start with you: what does refactoring actually mean and look like in a successful future execution or playbook? Can you share your thoughts? Because this is what people want to get to, because that's where the value will come from. That's where the iteration gets you. What's your take on this refactoring?
>> Yeah. So, I mean, we're in the consumer business, so I'm always about what difference it's going to make for the consumer. So when you look at refactoring and you look at what's happening in the space: what is the difference the consumers are going to see, and are they willing to pay for it? So we can strip away the technical layers, and we all get caught up in the industry with these buzzwords and terms, but at the end of the day, when it moves to the consumer, the consumer just sits there and says, so what's the value? How much am I paying? And so what we're trying to do at Mobile X is use the cloud, and use innovation, to create a better experience for the consumer. One way to do that is to basically help the customer understand their usage patterns. You know, right now, today, they don't understand that. Right? If I asked you how much you pay for your mobile bill, you'd tell me your cell phone bill is $150, but then I'm going to ask you the next question: how much data do you use? And you go, I don't know, right?
>> John: Unlimited.
>> And then I'd ask, well, why are you on unlimited? And you'd go, I don't know. So I sit back and go, most customers are like you. You're basically paying for a service where you have no clear idea of what you're getting, and it's designed by the carriers to scare you into thinking you need it. So I think we've got to get away from the buzzwords that we use as an industry and just dumb that down to: what does that mean for a consumer? And I think the cloud is going to allow us to create some very unique ways for consumers to interact with their device and their usage of that device. And I think that's the holy grail for me.
>> Yeah, that's a great point. And it's worth calling out, because I think if the cloud can get you 10X the value at a reduction in cost compared to the competition, that's one benefit people will pay for. And the other one is just, hey, that's really cool, I value that, that's a valuable thing, I'll pay for it. So it's interesting; the cloud scale there, it's just a good mindset.
>> Yeah. I always like to say to people, you know, I've spoken a lot to the Dish guys about what Open RAN is going to do, and I keep saying to them, so what's the value that I'm going to get from it as a consumer? And they'll say, oh, it's flexible pricing plans. They're only now starting to talk about what the end product of this technology actually is. You look at eSIM, right? eSIM has been around for a long time; it's only now that we're starting to see eSIM technology get enabled. The carriers fought that for a long, long time. So there's a monumental shift that needs to take place, and it sits with the four or five carriers in our countries.
>> Awesome. Nicolas, what's your take on refactoring?
Obviously, you know, you've got APIs, you've got all this cool software enablement. How do you get to refactoring, and how do you execute through that?
>> I mean, it's a little bit of what Peter was saying as well, right? The advantage at that point is that, you know, all our stuff basically lives in the cloud. So it's an opportunity to get that closer: just having better latency, making sure that you're not losing your photos and your data when you lose your phone, and just better access in general. I think ultimately the push to the cloud right now is mostly just a cost reduction tactic as far as the carriers are concerned, right? They don't necessarily see how they can bridge that gap and then, from there, start interacting with the rest of the OTT world; and, you know, Netflix is built on Amazon, and companies like that, right? So as you're able to get closer, as a carrier, to that cloud where the data lives, that also just empowers a better digital experience.
>> Yeah, I think that's where the proof point will be; that's where the rubber will meet the road, or the proof is in the pudding, whatever expression you like. Once they get to that cost reduction, if they can wake up and say, whoa, we can actually do something better here, or if they don't, someone else will, right? That's the whole point. So, final question as we wrap up: ecosystem changeover. A lot more ecosystem action. I mean, there are a lot of vendors here at Mobile World Congress. But real quick, Peter, Nicolas, your take on the future of the ecosystem around this new telco. Peter, we'll start with you.
>> Yeah, look, I mean, again, it keeps coming back to where I say that consumers have driven all the ecosystems that have ever existed. And when I say consumers, that also applies to IoT, right? So it's not just B2C, it's also B2B. So look to the consumer and look to the business to see what pain points you can solve, and that will create the ecosystems. None of us bet on Uber, none of us bet on Airbnb; otherwise we'd all be a lot richer than we are today. None of us saw those platforms coming, and by the way, we've been in mobile and wireless and the smartphone space for a long time, and we all missed those applications. And if you ask the CEO of a telco today what the 5G killer application is that's going to send 5G into the next atmosphere, they can't answer the question. They'll talk about drones and robotic surgeries and all things that will basically never have any value to a consumer at the end of the day. So I think we've got to go back to the consumer, and that's where my focus is, and say, how do we make their lives better? And that will create the ecosystem.
>> Yeah, I mean, they go for the low-hanging fruit: low latency and whatnot. But yeah, we'll see what happens. Nicolas, your take on ecosystems as they develop: a lot more integration, not customization. What are your thoughts?
>> Yeah, I think so too. I mean, going back 20 years or so, the network was the product, connectivity was the product. Today it's a building block, right? Something that you integrate, that's part of your experience. So in the same way, we're seeing convergence between telecom and financial services, right? You see a lot of telcos trying to be banks, and banks and fintechs trying to be telcos.
It's a blending of that, right? So at the end of the day, it's like, what is the experience? What is the above and beyond the connectivity? Because for customers, at this point, it's just not differentiated based on connectivity; it's kind of become just a basic commodity. So even as you look at what Peter is building, right, what is the experience above and beyond just buying a plan that I get out of it? Or if you are a media company, you know, how do I pair my content or solve real problems? Like for instance, we work a lot with the NBA and TikTok. They get into markets where, you know, they have a video product at the end and people are not well-connected, and that's a problem, right? So it's an opportunity for them to bring the building block into their ecosystem and start offering solutions that are a different shape. >> Awesome. Gentlemen, thank you so much. Both of you, both experienced entrepreneurs and executives riding the wave on the right side of history, I believe. Thanks for coming on theCube, I appreciate it. >> Thanks for having us. >> If you're not riding the wave the right way, you're driftwood. And we're going to toss it back to the studio. Adam and the team, take it from here.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Adderton | PERSON | 0.99+ |
Nicholas Gerrard | PERSON | 0.99+ |
America | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Nicholas | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
$150 | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
15 years | QUANTITY | 0.99+ |
Ox Fuel | ORGANIZATION | 0.99+ |
MobileX | ORGANIZATION | 0.99+ |
TelcoDR | ORGANIZATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
OxyGo | ORGANIZATION | 0.99+ |
Deutsche Telekom | ORGANIZATION | 0.99+ |
Mobile X Global, Inc. | ORGANIZATION | 0.99+ |
Mobile X Global | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Boost Mobile | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
David Blanca | PERSON | 0.99+ |
10X | QUANTITY | 0.99+ |
Airbnb | ORGANIZATION | 0.99+ |
four | QUANTITY | 0.99+ |
TikTok | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
30 years | QUANTITY | 0.99+ |
Nico | PERSON | 0.99+ |
Danielle | PERSON | 0.99+ |
Both | QUANTITY | 0.99+ |
John ferry | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
Nicolas Girard | PERSON | 0.99+ |
second phase | QUANTITY | 0.99+ |
NBA | ORGANIZATION | 0.99+ |
Danielle Royce | PERSON | 0.99+ |
Today | DATE | 0.99+ |
OXIO | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
trillions of dollars | QUANTITY | 0.98+ |
ORAN | ORGANIZATION | 0.98+ |
telco | ORGANIZATION | 0.97+ |
Mobile World Congress | EVENT | 0.97+ |
one | QUANTITY | 0.97+ |
five carriers | QUANTITY | 0.97+ |
20 | DATE | 0.96+ |
last night | DATE | 0.96+ |
Legos | ORGANIZATION | 0.95+ |
telcos | ORGANIZATION | 0.95+ |
three carriers | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
one point | QUANTITY | 0.94+ |
first thing | QUANTITY | 0.94+ |
fourth largest mobile | QUANTITY | 0.93+ |
first cloud | QUANTITY | 0.9+ |
One way | QUANTITY | 0.89+ |
Nicolas | PERSON | 0.89+ |
Mobile Congress | ORGANIZATION | 0.89+ |
2021 | DATE | 0.87+ |
Mobile X | TITLE | 0.85+ |
20 years ago | DATE | 0.8+ |
theCube | ORGANIZATION | 0.77+ |
three phases | QUANTITY | 0.77+ |
Bon Jovi | PERSON | 0.77+ |
ECM | TITLE | 0.73+ |
Keynote Reaction with DR
(upbeat music) >> Okay, Chloe, thank you very much. Hey folks, we're here in Cloud City with Danielle Royston. Great to see you. Watching you up on stage, I got to say, as the CEO of TelcoDR, leader and chief executive of that company, as well as a great visionary, you laid out the vision. It's hard to debate that. I mean, I think there's people who will say that vision is, like freedom, no one can debate it, but it's not going to happen. >> Yeah, there's still a lot of debate in our industry about it. There's a lot of articles being written about it. I've referenced one about, you know, should we let the dragons into the castle? For me, I think it's super obvious. I think other industries are like "Duh, we've made the move." And Telco is still like, "Hmm, we're not sure." And so, am I a visionary, I don't know. I'm just sort of Babe Ruth-ing it a little bit. I think that's where we're going. >> You know you do, you have a lot of content, podcasts, you write blogs, you do a lot of speaking. You brought it all together on stage, right? That has got to feel good. >> Yeah. >> You've got a body of work and it came together very nicely. How did you feel up there? >> Oh my God, it's absolutely nerve-wracking. I sort of feel like, you know, could you tell if my hands were shaking? Right, could you tell that my heart was racing? >> It's a good feeling. >> I don't know. >> Come on! >> I'll be honest, I'm happy it's over, I'm happy. I think I did a really great job and I'm really happy >> Yeah, you did a great job, I love the dragon reference-- >> Have it in the can. >> Fantastic, loved the Game of Thrones vibe there. It was cool-- >> Totally. >> One of the things I wanted to pick up on, I thought it was very interesting and unique, was the iPhone reference 14 years ago, because that really, to me, was a similar moment, because that shifted the smartphone. A computer that happened to make phone calls. And then we all knew who was the leader at that time, Nokia, Blackberry with the phones, and they became toast. That ushered in a whole other era of change, wealth creation, innovation, new things. >> Yeah. Well, up until that moment, carriers had been designing the phones themselves. They were branded with their logos. And so Steve Jobs fought for the design of the iPhone. He designed it with the consumer, with the user in mind. But I think what it really, I mean, it's such a big pivotal moment in our industry because it signaled the end of voice revenue and ushered in the era of data. But it also introduced the OTT players, right? They came in through the apps and started to siphon ARPU from the carriers. And this is like, it's a pivotal moment in the industry, like, changed the industry forever. >> It's a step function, it was a step function change, it's obvious, everyone knew it. But what's interesting is that we were riffing yesterday about O-RAN and Android. So you have iPhone, but Android became a very successful open source project that changed the landscape of the handset. Some are saying that that kind of phenomenon is coming here, into Telco with software, kind of like an Android model where that'll come in. What's your thoughts on that, reaction to that? >> Yeah, well the dis-aggregation of the hardware, right? We're in the iconic Ericsson booth, right? They get most of their revenue from RAN, from Radio Access Networks. And now with the introduction of Open RAN, right?
With 50% less CapEx, 40% less OpEx, you know, I think it's easiest for greenfield operators like Dish, that are building a brand new network. But just this month, Vodafone announced they're going to build the world's largest Open RAN network. Change is happening and the big operators are starting to adopt Open RAN in a real big way. >> So to me, riding the dragon means taking advantage of new opportunities on top of that dragon. Developing apps like the iPhone did. And you mentioned Android, they got it right. Remember the Windows Phone, right? They tried to take Windows and shove it into the phone-- >> Barely. >> It was a Kin phone too. >> I tried to delete it from my, look here, beep! >> I'm going to take this old world app and I'm going to shove it into the new world, and guess what, it failed. So if the Telcos try to do the same thing here, it will fail, but if they start building 5G apps in the cloud, and pick cloud native, and think about the consumer, isn't that really the opportunity that you're talking about? >> Well, I think it is, absolutely. And I think it's a wake up call for the vendors in our space, right? And I'm certainly trying to become a vendor with Totogi. I'm really pushing my idea. But you can't take, using your Windows example on the Windows Phone, you can't take a Windows app and stuff it onto a phone, and you can't take these old school applications that were written 20 years ago and just stuff them into the cloud, right? Cloud is not a place, it's a way to design applications, and it all needs to be rewritten, and let's go rewrite it. >> It's not a destination, as we always say. Let's take a step back on the keynote 'cause I know we just did a couple of highlights there, it wasn't the whole thing. We were watching it, by the way, we thought you did a great job, you were very cool and calm under pressure. But take us through the core ideas in the keynote. Break down the core elements of what the talk was about. >> Yeah, I think the headline really is, you know, just like there were good and bad things about the iPhone, right? It killed voice, but introduced data and all these other things. There's good and bad things about the public cloud, right? It's not going to be smooth sailing, no downsides. And so I acknowledge that, even though I'm the self-appointed queen, you know? This self-appointed evangelist. And so, I think that if you completely ignore the public cloud, try to stick your head in the sand and pretend it doesn't exist, I think there's nothing but downsides for Telcos. And so I think you need to learn how to maximize the advantage there, ride the dragon, like spew some fire and, you know, get some speed and height, and then you can double your ARPU. But going from there, the next three, I was trying to give examples of what I meant by that, of why it's a double-edged sword, why it's two sides of the coin. And I think there's three areas, which is the enterprise, the network, and the relationship with subscribers. And that's really what the talk is about. >> The three main pillars. >> Yeah, yeah! >> The future of work, enterprise, transition, Open RAN. >> The network and then the relationship with the subscribers. >> Those are the structural elements you see. >> Yeah, yeah, yeah. >> What's the most important one you think, right now, that people are focused on? >> I mean, I think the first one, with work, that's an easy one to do, because there's not too much downside, right?
I think we all learned that we could work productively from home. The reason the public cloud mattered there is because we had tools like Zoom and G Suite and we didn't need to be, I mean, imagine if this had happened even 20 years ago, right? Broadband at the home wasn't ready, the tools weren't ready. I mean, it would have been a bigger disaster than it was, right? And so this is an opportunity to sort of ride this work from home wave, where a lot of CEOs are saying, we're not coming back, or we're going to have smaller offices. And all of those employees need fiber to their home. They need 5G at their home. I mean, if I'm a head of enterprise in a Telco, I am shifting my 5G message from like random applications or whatever, to be like, how are you getting big pipes to the home so your workers can be productive there? And I don't hear Telcos talking about that, and that's a really big idea. >> You know, you say it's a no brainer, but it's interesting, you had your buildings crumbling, which was great, very nice effect in the talk. I heard an executive, a Wall Street executive, the other day, talking about how, "My people will be back in the office. "I'm going to mandate vaccinations, they're going to be back "in the office, you work for me. "Even though it's an employee friendly environment "right now, I don't care". And I was shocked. I go, okay, this is just an old guy. But it's not just the fact that it's an old guy, old guard doing that, because I'll take two examples of old guys, Michael Dell and Frank Slootman. >> Yeah. >> Right, Michael Dell, you know, hundred billion dollar company, Frank Slootman, hottest, you know, software company. Both of them sort of agree. It's a no brainer. >> Yeah. >> Why should I spend all this money on buildings? And my people are going to be more productive. They love it, so why fight the fashion? >> Well, I think the office, and I could talk about this for a long time and I know we don't have that much time, but offices, they're a way to see when did you come in and when did you leave, and to look over your shoulder at what you're working on. And that's what offices are for. Now, we tell ourselves it's about collaboration and all this other stuff. And you know, these guys are saying, "come back to the office," because they don't have an answer on how to manage productivity. What are you working on? Are you authentically working 40 hours a week? I want to see, I know if at least you're here, you're here. Now, you might be playing, you know, Minesweeper. You might be playing Minesweeper on your computer, but at least your butt was at your computer. So yeah, I think this is a pivotal moment in work. I think Telcos could push it, to work from home. We'll get you the pipes, we'll get you the cloud-based tools to help manage productivity, to support the change in work style. >> Yeah, and we've covered this in theCube many times, about how software is going to enable this virtual first model, no one's actually built software for virtual first. I think that's going to happen. Again, back to your theme, software, but I want to ask you about software defined infrastructure. You mentioned O-RAN, and as software eats the world and eats infrastructure, you still need infrastructure. So, talk about the relationship of how you see O-RAN competing and winning with the balance of software versus the commodity argument. >> Yeah, and I think this is really where people get scared in Telco. I mean, authentically nervous, right.
Where you're like, okay, really the public cloud is at that network edge, right? We're really going to like, who are we? It's an identity crisis. We're not the towers anymore. We're renting space, right? We're now dis-aggregating the network, putting the edge cloud right there and it's AWS or Google. Who are we, what do we do, are we networks? Are we a tech company? Right, and so I'm like, guys, you are your subscribers, and you don't focus on that. I mean, it's kind of like a last thought. >> So you're like a therapist then too, not just an evangelist. >> I'm a little bit of a therapist. >> Okay, lay down on the couch, Telco. >> Let's talk about what your problems are. (laughs) >> They have tower issues. >> All seriousness though, the tower business is changing, backhaul is changing. Look at direct connects, for instance. The rise of direct connects kind of killed the exchanges. I mean, broadband, backhaul, last mile, >> Yeah. >> Completely, still issues, >> Yeah. >> But it's going to software and so that's there. The other thing I want to get to quickly, I know we don't have a lot of time, is the love relationship you talk about with subscribers. We had Peter Adderton on, formerly of Boost Mobile, earlier. He was saying, if you don't have a focus on the customer, then you're just selling minutes and that's it. >> Yeah. >> And his point was, they don't really care. >> Yeah. Let's talk about organizational energy, right? How much energy is contained within any organization, not just a Telco, but any organization. The sum of your people's time is the hours they work per week. And then you think of that as a stack and how you're allocating your time and spending your time, right? And so I think they spend 50% of their time, maybe more, fighting servers, machines, the network, right? And having all these battles. How much of that organizational energy is dedicated to driving great subscriber experiences? And it's just shrunk, right? And I think that's where the public cloud can really help them. Like ride the dragon. Let the dragon deal with some of this underlying stuff. So that you can ride the dragon, survey the land, focus on your subscriber, and back to the software. Use software, just like the OTT players are doing. They are taking away your ARPU. They're siphoning your ARPU, 'cause they're providing a better customer experience. You need to compete on that dimension. Not the network, not the three Telcos in the country. You're competing against WhatsApp, Apple, Amazon, Facebook. And you spent how much of your organizational energy to focus on that? Very small. >> And that's where digital platforms roll in, everybody uses the word platform, why? Because everybody wants to be a platform. Why do you want to be a platform? Because I want to be like Amazon, they're a platform. And you think about Netflix, right? You don't think about Netflix UK or Netflix Spain, right? >> It's global. >> There's one Netflix >> Yeah, yeah. >> You don't think about their marketing department or their sales department or their customer service, you think about the app. >> Yeah. >> You know. One interface. And that's what digital platforms allow you to do. And granted, there's a lot of public policy to deal with, but if you're shooting satellites up in space, >> Yeah. >> You know, now you own that space, right, a global network. >> And what makes Netflix so good, I think, is that it knows you, right? It knows what you're watching and recommends things, and you're like, "Oh, I would like that, that's great."
Who knows more about you than your mobile phone? You carry it everywhere you go, right? What you're watching, what you're doing, who you're calling, what time did you wake up? And right now all of that data we talked about a couple of days ago, it's trapped in siloed old systems. And like, why do people think Google knows so much about you? Telco knows about you. And you can start to use that to drive a great experience. >> And you've got a great relationship with Netflix. The relationship we have with our carrier is, you go to your admin, "can you call these guys? "I don't know, I lost the password, I can't get in". >> Right. >> It's like-- >> Or you get SIM hacked-- >> I don't have an hour and a half to call your call center 'cause you don't have a chat bot, right. >> I don't have time. >> Chat bot, right. I can't even do the chat bot because my problem is, you're like, I got to talk to someone. All of their systems are built with the intention of a human being on the other side, and there's all this awesome chat bot AI that works. >> Yeah. >> Set it free. >> Yeah, yeah, right. You'd almost rather go to the dentist than call your carrier. >> Well, we're going to wrap things up here on the keynote review. Did you achieve what you wanted to achieve? I mean, controversy, bold vision, leadership, that also came across, but people know who you are now. You're out there and that's great news. >> Yeah. I think I rocked the Telco universe, and that was my goal, and I think I accomplished it, so, very excited. >> Well, we love having you on theCUBE. It's great to have great conversations, not only are you dynamic and smart, you're causing a lot of controversy, in a good way, and waking people up. >> Making people talk, that's a start. >> And I think the conversations are there. People are talking and having relationships, the ecosystem is open, it's all there. Danielle Royston, you are a digital revolution, DR. Telco DR, thanks for coming to theCube. >> Thank you so much, always fun. >> Good to see you. >> Thanks. >> Of course, back to the Cloud City studios. Adam is going to take it from here and continue on day three of theCube. Adam in studio, thanks for having us and take it from here.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Telco | ORGANIZATION | 0.99+ |
Chloe | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Danielle Royston | PERSON | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Blackberry | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Adam | PERSON | 0.99+ |
Peter Adderton | PERSON | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Boost Mobile | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
50% | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
two sides | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Minesweeper | TITLE | 0.99+ |
Windows | TITLE | 0.99+ |
TelcoDR | ORGANIZATION | 0.99+ |
Android | TITLE | 0.99+ |
Game of Thrones | TITLE | 0.99+ |
40% | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
three areas | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Michael Dell | PERSON | 0.99+ |
O-RAN | TITLE | 0.99+ |
three | QUANTITY | 0.99+ |
an hour and a half | QUANTITY | 0.98+ |
Radio Access Networks | ORGANIZATION | 0.98+ |
three main pillars | QUANTITY | 0.98+ |
G Suite | TITLE | 0.98+ |
14 years ago | DATE | 0.98+ |
20 years ago | DATE | 0.98+ |
this month | DATE | 0.97+ |
first model | QUANTITY | 0.96+ |
two examples | QUANTITY | 0.96+ |
hundred billion dollar | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
40 hours a week | QUANTITY | 0.96+ |
Cloud City | LOCATION | 0.95+ |
first one | QUANTITY | 0.95+ |
OPEX | ORGANIZATION | 0.94+ |
day three | QUANTITY | 0.93+ |
CapEx | ORGANIZATION | 0.92+ |
couple of days ago | DATE | 0.9+ |
Dish | ORGANIZATION | 0.9+ |
Zoom | TITLE | 0.9+ |
Totogi | ORGANIZATION | 0.89+ |
One interface | QUANTITY | 0.89+ |
Wall Street | LOCATION | 0.89+ |
Open RAN | TITLE | 0.89+ |
Netflix UK | ORGANIZATION | 0.88+ |
first | QUANTITY | 0.87+ |
DR | PERSON | 0.86+ |
theCube | ORGANIZATION | 0.83+ |
Barbara Kessler & Ryan Broadwell, AWS | AWS re:Invent 2020 Partner Network Day
>> Announcer: From around the globe, it's the CUBE with digital coverage of AWS re:Invent 2020, special coverage sponsored by the AWS Global Partner Network. >> Welcome back to theCUBE's virtual coverage of AWS re:Invent 2020. It's virtual this year; we're usually in person, but this year we have to do remote interviews because of the pandemic. It's been a great run, a lot of great content happening here in these next three weeks of re:Invent. We've got two great guests here as part of our coverage of the APN Partner Experience. I'm your host, John Furrier. Barbara Kessler, Global APN Programs Leader, and Ryan Broadwell, Global Director of ISVs for AWS. Thanks for coming on theCUBE, thanks for joining me. >> Hey, thanks for having us, it's great to be here. >> You know we heard of-- >> Yeah, thanks for having us John. >> Thanks for coming on. Sorry we're not in person, but tons of content. I mean, there's a lot of the VODs, the main stages, but the news hitting this morning around Doug's comments, a strong focus on ISVs, is just a continuation. We heard that last year, but this year there's more focus, investments there, new announcements. Take us through what we just heard and what it means. >> Yeah John, I'll jump in first and then let Barbara add some additional color and commentary, but I think it is a continuation for us as we look at continuing to build momentum with our ISVs. They're mission critical for us, and we hear that loud and clear from our customers. So as you think about building off what Doug was talking about, I think it's first important for us to start with this: we look to help our partners build, and build well-designed solutions, on AWS, supporting their innovation and transformation and working together to deliver scalable, reliable, secure solutions for our customers. To facilitate this, we offer programs such as AWS SaaS Factory, that provide enablement to our ISVs to build new products, migrate single-tenant environments, or optimize existing SaaS solutions on AWS. And we do this through mechanisms like webinars, bootcamps, workshops and even one-on-one engagements. You know, as you talked about, we just heard Doug announce AWS SaaS Boost, which is a ready-to-use open source implementation of SaaS tooling and best practices to accelerate ISVs' path to SaaS. Through SaaS Factory, which we've worked on with many ISVs in the last few years and you're well aware of, we have lots of learnings and we've helped a lot of partners make that journey towards SaaS. Partners like BMC, CloudZero, Nasdaq, Cohesity, or F5 transformed their delivery and business models to SaaS. We've had a lot of demand for this type of engagement. And we knew it was important that we come up with a scalable way to help partners accelerate their transformation. SaaS Boost provides a prescriptive experience to transform applications through an intuitive tool with many core services needed to develop and operate on the AWS Cloud. In addition to that, we look to use the Well-Architected Framework, which is proven to set the architectural best practices for designing and operating systems in the Cloud, to help ISVs build their solutions on AWS. We just launched two additional lenses in the Well-Architected Tool, to enable ISVs to conduct these reviews from within the AWS console: one for SaaS environments, and one aligned with foundational technical reviews, which helps partners prepare for the technical validation in AWS Partner Programs. >> You know, the SaaS Boost, I love that, I was joking on Twitter, it sounds like an energy drink.
Give me some of that SaaS Boost, don't drink too many of them or you get immune to it, too strung out, but this is what people want, Barbara. This is about the Partner Network. You guys are providing more stuff, more successful programs and capabilities. This is what the demand is for: help me get there faster, a path to SaaS. Can you explain what this means for partners? What's in it for them, can you share your thoughts? >> Yeah, absolutely. And you know, Ryan talked about some of the things that we do to help our ISV partners build their software or SaaS products. But in addition to that, we provide a number of programs and resources to help partners also grow their business through marketing and sales focused programs. That's an area where we are focused on investing deeply with our partner community. For example, we offer APN Marketing Central, through which partners can find and launch free customizable marketing campaigns, or even find a marketing agency to work with that has experience messaging AWS. It also offers the APN Marketing Academy. We recognize that not all partners, especially if they're in their startup stages, have those investments and skill sets yet around marketing. So Marketing Academy offers self-service content to teach partners who don't have that capability in house today how to drive awareness campaigns and build demand for their offerings. We also offer a broad set of funding benefits to help partners, starting from the build stage that Ryan talked about, through Sandbox Credits to support their development, all the way through marketing with Market Development Funds, and as they're selling, with what we call our Partner Opportunity Acceleration Program, which is how we fund POCs to support our partners in winning new customers. We also heard Doug announce in the keynote that we are launching the ISV Accelerate Program. This is our new co-selling program for ISVs that offers compensation incentives for AWS account managers, access to co-sell specialists, and reduced Marketplace listing fees to help our partners continue to grow their business with us. >> You know, successful selling is amazing. You want to make money. I mean, come on, you bring a lot to the table. Co-selling, I think that's a huge point. Nice call out there. Ryan, can you give some examples of partners that have been successful with these resources? >> Hey John, thank you. Yeah, it'd be great to kind of walk through one good example in a little bit of detail, and what we've seen with Sisense is a great example of a partner that leveraged these resources, and the work that they've done with Luma Health. Luma Health serves millions of patients and provides a Cloud-hosted patient engagement platform that connects patients and providers. You know, when word about COVID started spreading, Luma helped handle a big increase in questions and concerns from patients and providers. Luma Health saw an opportunity to create new products to help patients and providers during the pandemic. To decide what to build and how to build it, the company wanted to analyze sentiment, signal, and data in real time. Using Sisense, Amazon Redshift, and AWS Data Migration Services, Luma Health built a platform that delivered the analytics and insights it needed, democratizing access to the data for all users. As a result, Luma Health uncovered insights such as the fact that SMS was the preferred method of communication and that many patients had similar questions.
Just three weeks after their hypothesis, Luma Health released new products based on its insights: a turn-key, EHR-enabled healthcare solution, zero-contact check-in, and a COVID-19 broadcast messaging system. >> So a lot of good successes. The question that I would ask you guys, and this is probably what's on everyone's mind, is: I'm a partner, I'm growing, obviously I'm in the partner network because I'm being successful. I don't have a lot of time. I need to figure out all the stuff that you have. You have so much going on that's good for me. I don't know what to do. Can you help me figure out what resources and programs to leverage? I could imagine this is a question that I would have: I want to make money, co-sell, I want to get into this program. What's the best path? I mean, what do I do? Can you share how you help your partners get on the right road, have the right resources, the right programs? 'Cause it makes it more consumable. This is probably a big challenge, can you share your thoughts? >> Yeah, happy to explore that. So we certainly find a lot of opportunity to innovate with our partners and customers, and as a result we do offer a broad range of programs, resources, and material to meet the diverse needs of those partners and customers. One focus of these programs and enablement models that we offer partners is to help our partners build their products and build their business with us. And the other focus is to create program structures that help customers find the right partner and the right solution at the right time. But we recognize it's a lot (chuckles) and we want to make sure that our partners are easily able to find what's most relevant to them. And to deliver this more effectively for ISV partners specifically, Doug just announced the launch of ISV Partner Path. As with everything we do at AWS, this new program structure works backwards from our customers and our partners to deliver on the needs of both of those audiences. When a customer identifies a need for a solution, they search for that solution based on their business needs and the outcomes that they're looking to deliver, rather than searching based on a partner profile. So ISV Partner Path pivots the focus that we have today on partner-level tier badging to instead focus on solution-level validation badging, which helps us better align to what our customers are looking for and how they look for software products. The new model responds to the partner and customer feedback that we've heard: it removes APN tier requirements for ISVs, introduces the ability to engage across all of the products, services, and solutions that a partner offers, and pivots the partner badge attainment. So today our partners attain badging based on a tier, and moving forward, they'll attain that badging to go to market with solutions that are validated and have gone through a technical assessment to either integrate effectively or run effectively on AWS. So with fewer requirements to access APN programs, from differentiation to funding and co-selling, partners can engage more quickly, in a more meaningful way, and with a clearer path to develop their solution offering and go to market with AWS. >> Ryan, anything you want to add in terms of structural support, in terms of account management? Does everyone get a rep? Are there certain levels of attention? When does that come into play?
>> Yeah, I think Barbara has made a great point in that we have a lot of great programmatic resources, but there's also no substitution for engagement with a person. And we have Partner Development resources available to engage with our partners and help them develop their individualized plans, plans that help them understand how to maximize the opportunity with their customer set and expand their customer sets. This starts as soon as a partner registers with the AWS Partner Network; they're contacted by a Partner Development team member within the first business day. This is a commitment we find incredibly important to the partner, even when we have 50 or more new partners registering every single day. We look to go beyond that, and it's not just about onboarding, to your point John. Our partner team works backwards from the customer and the partner to help develop that joint plan. How do we focus on what's strategic to the partner and what becomes strategic to our customers? With that plan, our team works to activate it broadly across the team in support of achieving our joint goals. And then naturally, in all partnerships, we want joint accountability, we want mechanisms to measure success. >> You know, I talked to a lot of channel partners over the years in my career, and the Cloud really highlights the speed and the agility factor, but it all comes down to the same thing. I want to get my solution in front of the customer, I want to make money, I want to make it easy to use, make it easy to consume. I want to leverage the Cloud. This is kind of the process, this is how it always happens. This is what they want, and you guys are bringing a lot to the table, and that's important. And I think co-selling, having that kind of support, making it consumable and easy, is super great. So I have to ask you, with that, what's your advice for people who are jumping in? Because you're seeing more onboarding of ISVs than ever before. And we've been commenting on theCube for multiple years, we've been seeing the uptick in software SaaS ISVs. And remember, Amazon is not a hundred percent in the SaaS business. And Gartner just collapsed the platform-as-a-service and IaaS categories, which highlights the fact that your entire ISV landscape is wide open and growing. So there's new ISVs coming in. (chuckles) What advice would you give them to get started, experience and -- >> Yeah, I can take that. >> Yeah. >> Yeah, I can take that one, thank you. And I actually want to build on something Ryan said: we actually have more than 50 new partners joining the AWS Partner Network every single day. And so having the right structure for those partners to easily navigate, and the right resources for them, is something that's very top of mind for us. I think I can distill it down to about two primary pieces of advice, from my perspective, for a new partner who's trying to figure out how to work with us and get involved. First and foremost, build a relationship with your Partner Manager, help them know and understand your business, the customers that you focus on, the solutions you provide. The Partner Manager is your advocate and can be your mentor in working with AWS. Make sure they know what you're good at. Partners are able to build the best traction with our shared customers and our AWS sales team when it's very clear what they're good at and how their solutions solve specific customer problems.
And specialization through programs such as the Competency program, which validates solutions based on industry or workload, is really key to helping communicate that specific value. And second, I would say avail yourself of the resources available to you. We offer a number of self-serve resources, such as the new ISV Navigate Track that is launching in conjunction with ISV Partner Path, which provides individuals sort of step-by-step guidance to move through that engagement with us and connects them to all the resources that they need. Marketing Central, which we discussed earlier, to drive marketing campaigns that can be very self-serve, and then Partner Central, which offers a wealth of content, white papers, et cetera; that's our portal through which partners engage. And you can also access things like training and certification discounts to build your Cloud skills to support your business. But I think both of those are really important things to keep in mind, for partners who are just kind of getting started with us as well as partners who've been working with us for a while now. >> Ryan, what do you want to add to that? Because again, there's more ISVs coming. And again, Amazon has been very disruptive in its enablement of partners. Not everyone fits into a nice clean bucket. I mean, what looks like a category might be old and being disrupted, with a new category being developed. All these new categories and new solutions. It's hard to put people into buckets. So you have a tough job, how do you give advice to your partners? >> It is tough, and the rate of transformation continues, and the rate of innovation continues to quicken. My advice is lean in with us. We continue to invest our efforts in developing this vibrant community of partners. So lean in, we'll continue to iterate around and optimize our joint plans and activities, and we look to be able to continue to drive success for our customers and our partners. >> Well, you guys do a great job. I want to say I've watched the APN grow and change and evolve. Market demand is there, and you got the Factory, you got the Boost, you got the Lenses, you got the Partner Network, the people. It's a people equation with software, so congratulations. Thanks for coming on theCUBE. >> Thank you so much, appreciate the time. >> Thank you. >> Okay, great event here, re:Invent 2020 Virtual. This is theCUBE Virtual. I'm John Furrier, your host, wall-to-wall coverage with theCUBE, thanks for watching. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Barbara Kessler | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Ryan | PERSON | 0.99+ |
Barbara | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Ryan Broadwell | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
BMC | ORGANIZATION | 0.99+ |
Doug | PERSON | 0.99+ |
Nasdaq | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Cohesity | ORGANIZATION | 0.99+ |
pandemic | EVENT | 0.99+ |
Luma Health | ORGANIZATION | 0.99+ |
AWS Global Partner Network | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
Partner Central | ORGANIZATION | 0.99+ |
more than 50 new partners | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
second | QUANTITY | 0.98+ |
Cloud | TITLE | 0.98+ |
F5 | ORGANIZATION | 0.98+ |
CloudZero | ORGANIZATION | 0.97+ |
two great guests | QUANTITY | 0.97+ |
One focus | QUANTITY | 0.97+ |
Invent 2020 Partner Network Day | EVENT | 0.96+ |
SaaS Boost | TITLE | 0.96+ |
one | QUANTITY | 0.96+ |
hundred percent | QUANTITY | 0.96+ |
Invent 2020 Virtual | EVENT | 0.95+ |
AWS Partner Network | ORGANIZATION | 0.95+ |
Luma | ORGANIZATION | 0.95+ |
AWS Partner Network | ORGANIZATION | 0.95+ |
ISVs | ORGANIZATION | 0.94+ |
millions of patients | QUANTITY | 0.94+ |
first business day | QUANTITY | 0.92+ |
COVID | TITLE | 0.92+ |
re: | EVENT | 0.91+ |
one good example | QUANTITY | 0.91+ |
Sisense | ORGANIZATION | 0.91+ |
SaaS Factory | TITLE | 0.9+ |
this morning | DATE | 0.89+ |
Market Development Funds | OTHER | 0.88+ |
ISV Partner Path | TITLE | 0.87+ |
single day | QUANTITY | 0.86+ |
Monica Livingston, Intel | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE! Covering HPE Discover Virtual Experience, brought to you by HPE. >> Artificial Intelligence, Monica Livingston, hey Monica, welcome to theCUBE! >> Hi Lisa, thank you for having me. >> So, AI is a big topic, but let's just get an understanding, what is Intel's approach to artificial intelligence? >> Yeah, so at Intel, we look at AI as a workload and a tool that is becoming ubiquitous across all of our compute solutions. We have customers that are using AI in the Cloud, in the data center, at the Edge, so our goal is to infuse as much performance as we can for AI into our base platform, and then where acceleration is needed we will have accelerator solutions for those particular areas. An example of where we are infusing AI performance into our base platform is the Intel Deep Learning Boost feature set, which is in our second generation Intel Xeon Scalable Processors, and this feature alone provides up to 30x performance improvement for Deep Learning inference on the CPU over the previous generation. And we are continuing to infuse AI into our base platform with the third generation Intel Xeon Scalable Processors, which are launching later this month. Intel will continue that leadership by including support for bfloat16. Bfloat16 is a new format that enables Deep Learning training with similar accuracy but essentially using less data, so it increases AI throughput. Another example is memory: both inference and training require quite a bit of memory, and with Intel Optane for system memory, customers are able to expand large pools of memory closer to the CPU. Where that's particularly relevant is in areas where data sets are very large, like imaging, with lots of images and lots of high resolution images, like medical diagnostic or seismic imaging; we are able to run some of these models without tiling. Tiling is where, if you are memory-constrained, you essentially have to take that picture and chop it up into little pieces, process each piece, and then stitch it back together at the end, and that loses a lot of context for the AI model. So if you're able to process that entire picture, then you are getting a much better result, and that is the benefit of having that memory accessible to the compute. So, when you are buying the latest and greatest HPE servers, you will have built-in AI performance with Intel Xeon Scalable and Optane for system memory.
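For readers who want to see what "similar accuracy but essentially using less data" means concretely, here is a rough, illustrative sketch of the bfloat16 idea — a toy for intuition, not Intel's implementation or library code: bfloat16 keeps float32's 8-bit exponent (so the same dynamic range) but truncates the mantissa to 7 bits, so each value is essentially the top 16 bits of its float32 encoding and occupies half the memory. Frameworks that target the hardware support handle this conversion internally; the snippet only makes the trade-off visible.

```python
# Toy illustration: emulate bfloat16 by keeping the top 16 bits of a float32
# (1 sign bit + 8 exponent bits + 7 mantissa bits), with round-to-nearest-even.
# Sketch for intuition only, not production or Intel library code.
import numpy as np

def to_bfloat16_like(x: np.ndarray) -> np.ndarray:
    bits = x.astype(np.float32).view(np.uint32)
    rounding_bias = ((bits >> 16) & 1) + 0x7FFF        # round to nearest even
    truncated = ((bits + rounding_bias) & 0xFFFF0000).view(np.float32)
    return truncated                                    # same range, far fewer significant digits

vals = np.array([3.14159265, 1.0e-20, 6.0e8], dtype=np.float32)
print(vals)                    # full float32 precision
print(to_bfloat16_like(vals))  # coarser mantissa, identical exponent range
```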
>> Right, so second generation, these are the processors that are out now, so these are features that our customers are using today, third generation is coming out this month but for second generation, Deep Learning Boost, what's really important is the software optimization and the fact that we're able to use the hooks that we've built into the hardware but then use software to make sure that we are optimizing performance on those platforms and it's extremely relevant to talk about software in the AI space because AI solutions can get super expensive, you can easily pay 2 to 3x what you should be paying if you don't have optimized software because then what you do is you're just throwing more and more compute, more and more hardware at the problem, but it's not optimized and so what's really impactful is being able to run a vast number of AI applications on your base platform, that essentially means that you can run that in a mixed workload environment together with your other applications and you're not standing up separate infrastructure. Now, of course, there will be some applications that do need separate infrastructure that do need alliances and accelerators and for that, we will have a host of accelerators, we have FPGAs today for real time low latency inference, we have Movidius VPU for low-power vision applications at the Edge, but by and large, if you're looking at classical machine learning, if you're looking at analytics, Deep Learning inference, that can run on a base platform today and I think that's what's important in ensuring that more and more customers are able to run AI at scale, it's not just a matter of running a POC in a back lab, you do that on the infrastructure that you have available, not an issue, but when you are looking to scale, the cost is going to be significantly important and that's why it's important for us to make sure that we are building in as much performance as is feasible into the base platform and then offering software tools to allow customers to see that performance. >> Okay, so talking about the technology components, performance, memory, what's needed to scale on the technology side, I want to then kind of look at the business side, because we know a lot of customers in any industry undertake AI projects and they run into pitfalls where they're not able to even get off the ground, so converse to the technology side, what is it that you're seeing, what are the pitfalls that customers can avoid on the business side to get these AI projects designed and launched? >> Yeah, so on the business side, I mean you really have to start with a very solid business plan for why you're doing AI and it's even less about just the AI piece, but you have to have a very solid business plan for your solution as a whole. If you're doing AI just to do AI because you saw that it's a top trend for 2020 so you must do AI, that's likely going to not result in success. 
You have to make sure that you understand why you're doing AI. If you have a workload, or a problem, that could be easily solved with data analytics, use data analytics; AI should be used where appropriate, as a way to provide true benefit, and I think if you can demonstrate that, you're a long way toward getting your project off the ground. And then there's several other pitfalls, like data: do you have enough data, is it close enough to your compute in order to be accessible and feasible? Do you have the resources that are skilled in AI that can get your solution off the ground? Do you have a plan for what to do after you've deployed your solution, because these models need to be maintained on a regular basis, so some sort of maintenance program needs to be in place. And then infrastructure: cost can be prohibitive a lot of times if you're not able to leverage a good amount of your base infrastructure, and that's really where we spend a lot of time with customers, in trying to understand what their model is trying to do and can they use their base infrastructure, can they reuse as much of what they have, what is their current utilization, do they maybe have cycles in off times if their utilization is diurnal and during the night they have lower utilization, can you train your models at night rather than putting up a whole new set of infrastructure that likely will not be approved by management, let's be honest.
>> Last question for you, Monica, we are in the middle of COVID-19 and we see things on the news every day about contact tracing, for example, social distancing, and a lot of the things that are talked about on the news are human contact tracers, people being involved in manual processes, what are some of the opportunities that you see for AI to really help drive some of these because time is of the essence, yet, there's the ethics issue with AI, right? >> Yes, yes, and the ethics issue is not something that AI can solve on its own, unfortunately, the ethics conversation is something we need to have broader as a society and from a privacy perspective, how are we going to be mindful and respectful while also being able to use some of the data to protect society especially in a situation like this, so, contact tracing is extremely important, this is something that in areas that have a wide system of cameras installed, that's something that is doable from an algorithmic perspective and there's several partners of ours that are looking at that, and actually, the technology itself, I don't think is as insurmountable as the logistical aspect and the privacy and the ethical aspect and regulation around it, making sure that it's not used for the wrong purposes, but certainly with COVID, there is a new aspect of AI use cases, and contact tracing is obviously one of them, the others that we are seeing is essentially, companies are adapting a lot of their existing AI solutions or solutions that use AI to accommodate or to account for COVID, like, companies that have observations done and so if they were doing facial recognition either in metro stations or stadiums or banks, they now are adding features to their systems to detect social distancing, for example, or detect if somebody is wearing a mask. The technology, again, itself is not that difficult, but in the implementation and the use and the governance around it, I think, is a lot more complex, and then, I would be remiss not to mention remote learning which is huge now, I think all of our children are learning remote at this point and being able to use AI in curriculums and being able to really pinpoint where a child is having a hard time understanding a concept and then giving them more support in that area is definitely something that our partners are looking at and it's something that (webcam scrambles) with my children and the tools that they're using and so instead of reading to their teacher for their reading test, they're reading to their computer and the computer's able to pinpoint some very specific issues that maybe a teacher would not see as easily and then of course, the teacher has the ability to go back with you and listen and make sure that there weren't any issues with dialects or anything like that, so it's really just an interesting reinforcement of the teacher/student learning with the added algorithmic impact as well. >> Right, a lot of opportunity is going to come out of COVID, some maybe more accelerated than others because as you mentioned, it's very complex. Monica, I wish we had more time, this has been a really fascinating conversation about what Intel and HPE are doing with respect to AI. Glad to have you back 'cause this topic is just too big, but we thank you so much for your time. >> Thank you. >> For my guest Monica Livingston, I'm Lisa Martin, you're watching theCUBE's coverage of HPE Discover 2020, thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Monica Livingston | PERSON | 0.99+ |
Monica | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
2 | QUANTITY | 0.99+ |
COVID-19 | OTHER | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
each piece | QUANTITY | 0.99+ |
third generation | QUANTITY | 0.99+ |
second generation | QUANTITY | 0.99+ |
30x | QUANTITY | 0.98+ |
3x | QUANTITY | 0.98+ |
Octane | COMMERCIAL_ITEM | 0.97+ |
HPE Discover 2020 | TITLE | 0.97+ |
today | DATE | 0.96+ |
bfloat16 | COMMERCIAL_ITEM | 0.95+ |
both | QUANTITY | 0.95+ |
third generation | QUANTITY | 0.95+ |
Bfloat16 | COMMERCIAL_ITEM | 0.93+ |
this month | DATE | 0.93+ |
Xeon Scalable | COMMERCIAL_ITEM | 0.92+ |
later this month | DATE | 0.92+ |
Xeon | COMMERCIAL_ITEM | 0.91+ |
theCUBE | ORGANIZATION | 0.86+ |
one of them | QUANTITY | 0.83+ |
several partners | QUANTITY | 0.8+ |
up to 30x | QUANTITY | 0.76+ |
lot | QUANTITY | 0.76+ |
customers | QUANTITY | 0.75+ |
time | QUANTITY | 0.69+ |
couple things | QUANTITY | 0.63+ |
COVID | OTHER | 0.62+ |
Movidius | ORGANIZATION | 0.6+ |
Processors | COMMERCIAL_ITEM | 0.58+ |
Octane | ORGANIZATION | 0.56+ |
Learning Boost | OTHER | 0.56+ |
day | QUANTITY | 0.55+ |
Deep | COMMERCIAL_ITEM | 0.55+ |
images | QUANTITY | 0.53+ |
couple of | QUANTITY | 0.51+ |
Last | QUANTITY | 0.5+ |
Scalable | OTHER | 0.45+ |
COVID | TITLE | 0.43+ |
Deep Learning Boost | COMMERCIAL_ITEM | 0.39+ |
VPU | TITLE | 0.34+ |
Keynote Analysis | MIT CDOIQ 2019
>> From Cambridge, Massachusetts, it's The Cube! Covering MIT Chief Data Officer and Information Qualities Symposium 2019. Brought to you by SiliconANGLE Media. >> Welcome to Cambridge, Massachusetts everybody. You're watching The Cube, the leader in live tech coverage. My name is Dave Vellante and I'm here with my cohost Paul Gillin. And we're covering the 13th annual MIT CDOIQ conference. The Cube first started here in 2013 when the whole industry Paul, this segment of the industry was kind of moving out of the ashes of the compliance world and the data quality world and kind of that back office role, and it had this tailwind of the so called big data movement behind it. And the Chief Data Officer was emerging very strongly within as we've talked about many times in theCube, within highly regulated industries like financial services and government and healthcare and now we're seeing data professionals from all industries join this symposium at MIT as I say 13th year, and we're now seeing a lot of discussion about not only the role of the Chief Data Officer, but some of what we heard this morning from Mark Ramsey some of the failures along the way of all these north star data initiatives, and kind of what to do about it. So this conference brings together several hundred practitioners and we're going to be here for two days just unpacking all the discussions the major trends that touch on data. The data revolution, whether it's digital transformation, privacy, security, blockchain and the like. Now Paul, you've been involved in this conference for a number of years, and you've seen it evolve. You've seen that chief data officer role both emerge from the back office into a c-level executive role, and now spanning a very wide scope of responsibilities. Your thoughts? >> It's been like being part of a soap opera for the last eight years that I've been part of this conference because as you said Dave, we've gone through all of these transitions. In the early days this conference actually started as an information qualities symposium. It has evolved to become about chief data officer and really about the data as an asset to the organization. And I thought that the presentation we saw this morning, Mark Ramsey's talk, we're going to have him on later, very interesting about what they did at GlaxoSmithKline to get their arms around all of the data within that organization. Now a project like that would've unthinkable five years ago, but we've seen all of these new technologies come on board, essentially they've created a massive search engine for all of their data. We're seeing organizations beginning to get their arms around this massive problem. And along the way I say it's a soap opera because along the way we've seen failure after failure, we heard from Mark this morning that data governance is a failure too. That was news to me! All of these promising initiatives that have started and fallen flat or failed to live up to their potential, the chief data officer role has emerged out of that to finally try to get beyond these failures and really get their arms around that organizational data and it's a huge project, and it's something that we're beginning to see some organization succeed at. >> So let's talk a little bit about the role. So the chief data officer in many ways has taken a lot of the heat off the chief information officer, right? It used to be CIO stood for career is over. 
Well, when you throw all the data problems at an individual c-level executive, that really is a huge challenge. And so, with the cloud it's created opportunities for CIOs to actually unburden themselves of some of the crapplications and actually focus on some of the mission critical stuff that they've always been really strong at and focus their budgets there. But the chief data officer has had somewhat of an unclear scope. Different organizations have different roles and responsibilities. And there's overlap with the chief digital officer. There's a lot of emphasis on monetization whether that's increasing revenue or cutting costs. And as we heard today from the keynote speaker Mark Ramsey, a lot of the data initiatives have failed. So what's your take on that role and its viability and its longterm staying power? >> I think it's coming together. I think last year we saw the first evidence of that. I talked to a number of CDOs last year as well as some of the analysts who were at this conference, and there was pretty good clarity beginning to emerge about what they chief data officer role stood for. I think a lot of what has driven this is this digital transformation, the hot buzz word of 2019. The foundation of digital transformation is a data oriented culture. It's structuring the entire organization around data, and when you get to that point when an organization is ready to do that, then the role of the CDO I think becomes crystal clear. It's not so much just an extract transform load discipline. It's not just technology, it's not just governance. It really is getting that data, pulling that data together and putting it at the center of the organization. That's the value that the CDO can provide, I think organizations are coming around to that. >> Yeah and so we've seen over the last 10 years the decrease, the rapid decrease in cost, the cost of storage. Microprocessor performance we've talked about endlessly. And now you've got the machine intelligence piece layering in. In the early days Hadoop was the hot tech, and interesting now nobody talks even about Hadoop. Rarely. >> Yet it was discussed this morning. >> It was mentioned today. It is a fundamental component of infrastructures. >> Yeah. >> But what it did is it dramatically lowered the cost of storing data, and allowing people to leave data in place. The old adage of ship a five megabytes of code to a petabyte of data versus the reverse. Although we did hear today from Mark Ramsey that they copied all the data into a centralized location so I got some questions on that. But the point I want to make is that was really early days. We're now entered an era and it's underscored by if you look at the top five companies in terms of market cap in the US stock market, obviously Microsoft is now over a trillion. Microsoft, Apple, Amazon, Google and Facebook. Top five. They're data companies, their assets are all data driven. They've surpassed the banks, the energy companies, of course any manufacturing automobile companies, et cetera, et cetera. So they're data companies, and they're wrestling with big issues around security. You can't help but open the paper and see issues on security. Yesterday was the big Capital One. The Equifax issue was resolved in terms of the settlement this week, et cetera, et cetera. Facebook struggling mightily with whether or not how to deal fake news, how to deal with deep fakes. 
Recently it shut down likes for many Instagram accounts in some countries because they're trying to protect young people who are addicted to this. Well, they need to shut down likes for business accounts. So what kids are doing is they're moving over to the business Instagram accounts. Well when that happens, it exposes their emails automatically so they've all kinds of privacy landmines and people don't know how to deal with them. So this data explosion, while there's a lot of energy and excitement around it, brings together a lot of really sticky issues. And that falls right in the lap of the chief data officer, doesn't it? >> We're in uncharted territory and all of the examples you used are problems that we couldn't have foreseen, those companies couldn't have foreseen. A problem may be created but then the person who suffers from that problem changes their behavior and it creates new problems as you point out with kids shifting where they're going to communicate with each other. So these are all uncharted waters and I think it's got to be scary if you're a company that does have large amounts of consumer data in particular, consumer packaged goods companies for example, you're looking at what's happening to these big companies and these data breaches and you know that you're sitting on a lot of customer data yourself, and that's scary. So we may see some backlash to this from companies that were all bought in to the idea of the 360 degree customer view and having these robust data sources about each one of your customers. Turns out now that that's kind of a dangerous place to be. But to your point, these are data companies, the companies that business people look up to now, that they emulate, are companies that have data at their core. And that's not going to change, and that's certainly got to be good for the role of the CDO. >> I've often said that the enterprise data warehouse failed to live up to its expectations and its promises. And Sarbanes-Oxley basically saved EDW because reporting became a critical component post Enron. Mark Ramsey talked today about EDW failing, master data management failing as kind of a mapping and masking exercise. The enterprise data model which was a top down push for a sort of distraction layer, that failed. You had all these failures and so we turned to governance. That failed. And so you've had this series of issues. >> Let me just point out, what do all those have in common? They're all top down. >> Right. >> All top down initiatives. And what Glaxo did is turn that model on its head and left the data where it was. Went and discovered it and figured it out without actually messing with the data. That may be the difference that changes the game. >> Yeah and it's prescription was basically taking a tactical approach to that problem, start small, get quick hits. And then I think they selected a workload that was appropriate for solving this problem which was clinical trials. And I have some questions for him. And of the big things that struck me is the edge. So as you see a new emerging data coming out of the edge, how are organizations going to deal with that? Because I think a lot of what he was talking about was a lot of legacy on-prem systems and data. Think about JEDI, a story we've been following on SiliconANGLE the joint enterprise defense infrastructure. This is all about the DOD basically becoming cloud enabled. So getting data out into the field during wartime fast. 
We're talking about satellite data, you're talking about telemetry, analytics, AI data. A lot of distributed data at the edge bringing new challenges to how organizations are going to deal with data problems. It's a whole new realm of complexity. >> And you talk about security issues. When you have a lot of data at the edge and you're sending data to the edge, you're bringing it back in from the edge, every device in the middle is from the smart thermostat. at the edge all the way up to the cloud is a potential failure point, a potential vulnerability point. These are uncharted waters, right? We haven't had to do this on a large scale. Organizations like the DOD are going to be the ones that are going to be the leaders in figuring this out because they are so aggressive. They have such an aggressive infrastructure and place. >> The other question I had, striking question listening to Mark Ramsey this morning. Again Mark Ramsey was former data God at GSK, GlaxoSmithKline now a consultant. We're going to hear from a number of folks like him and chief data officers. But he basically kind of poopooed, he used the example of build it and they will come. You know the Kevin Costner movie Field of Dreams. Don't go after the field of dreams. So my question is, and I wonder if you can weigh in on this is, everywhere we go we hear about digital transformation. They have these big digital transformation projects, they generally are top down. Every CEO wants to get digital right. Is that the wrong approach? I want to ask Mark Ramsey that. Are they doing field of dreams type stuff? Is it going to be yet another failure of traditional legacy systems to try to compete with cloud native and born in data era companies? >> Well he mentioned this morning that the research is already showing that digital transformation most initiatives are failing. Largely because of cultural reasons not technical reasons, and I think Ramsey underscored that point this morning. It's interesting that he led off by mentioning business process reengineering which you remember was a big fad in the 1990s, companies threw billions of dollars at trying to reinvent themselves and most of them failed. Is digital transformation headed down the same path? I think so. And not because the technology isn't there, it's because creating a culture where you can break down these silos and you can get everyone oriented around a single view of the organizations data. The bigger the organization the less likely that is to happen. So what does that mean for the CDO? Well, chief information officer at one point we said the CIO stood for career is over. I wonder if there'll be a corresponding analogy for the CDOs at some of these big organizations when it becomes obvious that pulling all that data together is just not feasible. It sounds like they've done something remarkable at GSK, maybe we'll learn from that example. But not all organizations have the executive support, which was critical to what they did, or just the organizational will to organize themselves around that central data storm. >> And I also said before I think the CDO is taking a lot of heat off the CIO and again my inference was the GSK use case and workload was actually quite narrow in clinical trials and was well suited to success. So my takeaway in this, if I were CDO what I would be doing is trying to figure out okay how does data contribute to the monetization of my organization? 
Maybe not directly selling the data, but what data do I have that's valuable and how can I monetize that in terms of either saving money, supply chain, logistics, et cetera, et cetera, or making money? Some kind of new revenue opportunity. And I would super glue myself to the line of business executive and go after a small hit. You're talking about digital transformations being top down and largely failing. Shadow digital transformations are maybe the answer to that. Aligning with a line of business, focusing on a very narrow use case, and building successes up that way using data as the ingredient to drive value. >> And big ideas. I recently wrote about Experian which launched a service last year called Boost that enables the consumers to actually impact their own credit scores by giving Experian access to their bank accounts to see that they are a better credit risk than maybe portrayed in the credit score. And something like 600,000 people signed up in the first six months of this service. That's an example I think of using inspiration, creating new ideas about how data can be applied. And in the process by the way, Experian gains data that they can use in other contexts to better understand their consumer customers. >> So digital meets data. Data is not the new oil, data is more valuable than oil because you can use it multiple times. The same data can be put in your car or in your house. >> Wish we could do that with the oil. >> You can't do that with oil. So what does that mean? That means it creates more data, more complexity, more security risks, more privacy risks, more compliance complexity, but yet at the same time more opportunities. So we'll be breaking that down all day, Paul and myself. Two days of coverage here at MIT, hashtag MITCDOIQ. You're watching The Cube, we'll be right back right after this short break. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mark Ramsey | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Paul Gillin | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2013 | DATE | 0.99+ |
Ramsey | PERSON | 0.99+ |
Kevin Costner | PERSON | 0.99+ |
Enron | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
DOD | ORGANIZATION | 0.99+ |
Experian | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
GlaxoSmithKline | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
GSK | ORGANIZATION | 0.99+ |
Glaxo | ORGANIZATION | 0.99+ |
Two days | QUANTITY | 0.99+ |
five megabytes | QUANTITY | 0.99+ |
360 degree | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Cambridge, Massachusetts | LOCATION | 0.99+ |
Field of Dreams | TITLE | 0.99+ |
billions of dollars | QUANTITY | 0.99+ |
Mark | PERSON | 0.99+ |
Equifax | ORGANIZATION | 0.99+ |
Yesterday | DATE | 0.99+ |
over a trillion | QUANTITY | 0.99+ |
1990s | DATE | 0.98+ |
600,000 people | QUANTITY | 0.98+ |
US | LOCATION | 0.98+ |
this week | DATE | 0.98+ |
SiliconANGLE Media | ORGANIZATION | 0.98+ |
first six months | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
The Cube | TITLE | 0.98+ |
five years ago | DATE | 0.97+ |
Capital One | ORGANIZATION | 0.96+ |
first evidence | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
first | QUANTITY | 0.95+ |
MIT | ORGANIZATION | 0.93+ |
this morning | DATE | 0.91+ |
Hadoop | TITLE | 0.88+ |
one point | QUANTITY | 0.87+ |
13th year | QUANTITY | 0.86+ |
MIT CDOIQ conference | EVENT | 0.84+ |
MITCDOIQ | TITLE | 0.84+ |
each one | QUANTITY | 0.82+ |
hundred practitioners | QUANTITY | 0.82+ |
EDW | ORGANIZATION | 0.81+ |
last eight years | DATE | 0.81+ |
MIT Chief Data Officer and | EVENT | 0.81+ |
Sarbanes-Oxley | PERSON | 0.8+ |
top five companies | QUANTITY | 0.78+ |
The Cube | ORGANIZATION | 0.75+ |
Top five | QUANTITY | 0.74+ |
single view | QUANTITY | 0.7+ |
last 10 years | DATE | 0.69+ |
Boost | TITLE | 0.68+ |
a petabyte of data | QUANTITY | 0.65+ |
EDW | TITLE | 0.64+ |
SiliconANGLE | ORGANIZATION | 0.64+ |
John Healy, Intel | Red Hat Summit 2019
(upbeat music) >> Live from Boston, Massachusetts It's theCUBE covering Red Hat Summit 2019. (upbeat music) Brought to you by Red Hat. >> Welcome back live here in Boston along with Stu Miniman, I'm John Walls. You are watching The Cube. We are at the Red Hat Summit for the sixth time in our cube history. Glad to be here. Beautiful, gorgeous day Stu by the way in your hometown. >> Yeah love, beautiful day. It was a little cold when we were here two years ago, but lovely spring day here in Boston Yeah great to be here Glad you're with us here on the Cube Glad to have John Healy with us as well He is the VP of the Internet of Things group at Intel as long as the GM of Platform Management and Customer Engineering John, good morning to you. >> Good morning to you too >> You're kind of the newbie on the block in the IOT group Your data center for a long time moving over to IOT, so just if you would tell me a little bit about that transition >> Yeah, it's been good. >> What you're seeing and kind of what's exciting you about this opportunity for you. >> So it's really interesting, I spent nearly 15 years with the data center group at Intel, did a ton of work with partners like Red Hat over the years. A lot of our focus was in how we bring a lot of data center technologies and grow them somewhat beyond the basic data center. I spent a lot of time on the data network side working with com service providers and Aviv and the build out of their softwarization or cloudification if you like of the infrastructure and now moving over to IOT it's almost like I'm going to the other end of the wire. You know all of the applications and the services we were focused on were very much IOT centric You know enabling new markets, enabling customers to do things when they connected their different devices in ways they couldn't have done before. So, a lot of the focus now is on how we continue to bring those cloud technologies. A lot of things that have matured in the data center further and further down and a lot of cases to the edge in talking about the cloudification of the edge and enable new IOT services and IOT applications to fulfilled and to be delivered. >> John you bring great context to this discussion and I've said the last 10 years there was that pull of the cloud and Intel is at every single show that we go to And a lot of people haven't fully understand and grasp. They hear edge computing, they hear IOT and it's big you know orders of magnitudes more devices you know the surface area that we're going to do their but a lot of times, they're like oh well we're bringing it out of the cloud and back there and we're back in the data center I'm like no no no no no This is not the data centers that you built before, but there is connection between data centers >> Sure >> And the cloud and the edge and the edge in there so you've got good content. Help frame it a little bit as to where we are in the discussion. Some of the users, where they are in the whole IOT discussion. >> Yeah and I think we need to take a step back from looking at one demographic versus another think of IOT versus cloud It really is the continued proliferation of distributed computing. Think of that as sort of the horizontal underpinning of all... >> Absolutely. 
>> It's how do I enable more and more advanced intelligence and insight to be gained from the data that is being created and derived in how I run my infrastructure and relay new services and new capabilities on top of it and then you start applying that to all of the different markets and there's almost no market that you could conceive that can't take advantage of that So, as we build out data center capability and all of the underpinnings and how you best build out those platforms and take advantage of all the innovation, work with you know partners like Red Hat as being a critical component of that. So, you know we've worked with them for almost actually since the beginning, we were one of the early investors and work with a partner like Red Hat to make sure that those infrastructure components are optimized to work well together build a reference architecture that can be deployable in a data center environment whether it's in an enterprise or in a cloud vendors environment and increasingly enable them to build open and hybrid implementations Now, the reason I start there is because really we are proliferating from that pace. So if you consider, and we do, that the future is open, hybrid implementations, hybrid cloud, multi cloud where the workload can be enabled and supported by the best implementation and best environment from it. Could be the best cloud environment, the best underpinning platforms and the best solution stacks to enable that to occur. We're now moving that into realm of more and more of the IOT applications whether it's in industrial environments, it's in healthcare environments, in retail and automotive, all across the different landscape the premises is essentially the same that we insure that the right environment is created for the application to be supported and we're bringing more and more of the environmental you know capabilities of cloud like deployment cloud like management, increasingly out into those applications So, if you look at each of the different markets they're at differing points of their maturity or of their development I like to use the example of the com service provider the telecom service providers as sort of a basis of this is what happened when an entire market looked at the benefits of data center technology or server technologies and wanted the economies of scale and the openness of those environments to be appropriate and deployed in their environment, in their networks and we've seen that over the last 10 years in the journey from software SaaS for defining networking all the way through to NFV and now it's happening with cloudification of the network. Industrial environments are very very similar Decades of building you know vertically integrated solutions but not looking for the economies of scale that cloud like technology and open interfaces and open extractions can provide and we're starting to see them embark on that journey in a very similar manner. So, I see parallels as we move through from one market to the other But the basic underpinning is very similar. How we take advantage of those capabilities. 
>> Yeah fascinating stuff You said it's distributed architectures is where were building I look at Intel and it's fascinating to me because one the one hand everything's becoming more and more distributed yet at the same time you're baking things down into the chip as much as you can, you're working with partners at Red Hat to make sure that you know what gets baked into the kernels so you've got that give and take that it is both being as distributed as possible yet every component gets things like security built in to it and it has to work with all of the environments so it's not the discreet components that we might have had before and you talk about6 you know IT versus OT well they're becoming very similar, telecommunications is not the telecom of the dot com boom. They're doing things like NFV and the likes so you know we're starting to see IT kind of take over a lot of those environments are we not? >> Well, I think IT constructs and the abilities and capabilities of IT and it's the merging really is and we saw this you know we seen it over the last number of years it really is a marriage of both environments coming together the mechanism but though which IT will deploy and manage the infrastructure married to the expectations from a SLA and quality of service and such that's required on the network just as one example and then as we work with our partner like Red Hat, what's critically important is that we have multiparty approaches to the market which I think Stu to your point is kind of another dynamic we're seeing is that the implementation of the final solution at a platform level requires collaboration across multiple different entities, multiple different partners so if we're working with Cisco or with Dell or with Lenovo and Red Hat we're bringing together reference architectures that take advantage of the innovations in the platform, the work we're doing, the innovations into the silicone and the enabling and preservation of those innovations through the software stack. So whether its RHEL or Rev or its OSP and make sure that those are exposed and can be preserved in the implementation so then the application that sits on top of the stack can take advantage all the way down and be provisioned such that it maintains the policies and the levels of performance and such that of being defined for it. >> I'd like to you know go back to the telecom illustration that you were talking about just a movement ago and we talked about the internet of things and this explosion of devices and capabilities and the new spectrum that's being rolled out right 5G on the horizon You know very much in a nascent stage right now What is that going to do in terms of your attention or your focus because of the capabilities are going to be provided you know that I can't even imagine the kinds of speeds we're talking about the kind of capabilities we're talking about. How does that change your world? 
>> I think what is fundamental about 5G is how it starts to address some of the underpinning challenges in deploying multiple billions of connected endpoints or devices so IOT you know subscribes really to two things Connectivity and then the access to our unleashing of all of the data it's really those two dynamics Once you comment these devices together or provide for connectivity to and from them, you now have the ability to drive more insight from the data that they're capturing and make more intelligent and informed decisions about how you provision and then all sources of new applications and service types become possible as a result of that but there in both of those there's a challenge. How do you connect all of those devices together in a manner that's you know efficient to deploy and easy to manage and also provide for the connectivity that is very burst in nature You know there are time when you will need pretty reasonable sizeable bandwidth if it's a video type application and times when you really won't need very much at all and how do you do that in an environment that's affordable and cost effective to deploy? If you're a manufacturing plant manager running cable to every single one of your You know nodes or connectors or sensors across your production plant is a pretty orneriest task and its an expensive capital deployment, but 5G provides you the ability to provide that connectivity within your enterprise or within your factory environment in an efficient manner. It's wireless based. It also provides for the very low latency that allows for real time applications and it provides for mass deployment and management of very large numbers of endpoints so if we think of the density of 5G the low latency capability of it and then the manageability in framework that is in an environment that is predictable that is policy and SLA governed you start to address some of the really fundamental challenges that connecting vast numbers of devices that that can present. So I see 5G as a path to significantly accelerating what we have always envisioned as being the internet of things and as a result of it, new services and new service categories will be enabled on top of it that were before maybe possible but not possible in an efficient and affordable manner >> Can you give me a practical example of that or just... >> Well, if you think even a smart city as an example where the light posts and the traffic signals and kiosks are all playing a role in a connected mesh of interconnected entities you could have a situation and you know for the US audience something like an Amber Alert which we'd see where we want to you know search for a very specific license plate in the city. Well today its a pretty manual process, the Amber Alert is issued, it may be a text on your phone. We get those alerts, there's often times a display over to the smart display over the freeway but then it's up to the drivers to look out. 
Well just consider the possibilities when the cars using their own vision, which the autonomous driving you know evolution or revolution is allowing us progressing All of the cameras on all of the cars now become actively watching for license plates and they can pick up whether and then a car can enroll itself into or out of that service so if your car is sitting at a garage and this request comes it'll report back I'm sitting in the garage I'm not part of the mix but if it's on the freeway, it can enroll itself and start to actively search for that license plate that's an example and then all of the connected nodes across the city become points for an exchange of data to and from the different cars as they are passing by and all of that infrastructure is enabled by 5G. So that's an application that yeah we don't have it today, but it becomes a very possible application in the future. >> Alright John, so we're at Red Hat Summit and as you said Intel and Red Hat have a long partnership RHEL 8 was announced today can you give us the latest on the deep integrations and what users should be expecting. >> Yeah and what we're really excited about with Red Hat over the years we've really shared a common vision about what we believe the industry should be capable of achieving and this concept of open hybrid environment, it's open hybrid clouds we've been working with them for a long time on how we best enable that so in upstream we work well together, we collaborate on what technologies we want to see exposed and supported within the different communities and then on the downstream into the products with the example of what you're describing to do with RHEL 8 What's really exciting is we did it just as a example, we did a very large data centric launch in early April We were extremely excited to bring you know a whole portfolio of new products to the market together to expand form new CPUs all the way through to some of our storage products and memory products and the capabilities of each of those is what really needs to continued to be integrated and supported with the product portfolio that Red Hat had so with RHEL 8 we're seeing things like our DL Boost for deep learning you know taking advantage of specific accelerations within the CPU in our scalable ZM processor so it can take advantage of those and really enhance the performance and behavior of the deep learning algorithms just as one example and that's you know time to market with us on RHEL 8 we're delighted about the integration as it happened same thing with some of our memory technologies and the support for those within RHEL so a customer deploying an application knows that the innovations within the hardware within the silicone are available and manageable form the software environment that they're deploying and that's the benefit of this tight collaboration as we plan together for future you know innovations and how they can best be integrated and do the work upstream in advance of that so that the community issues whether it's open shift or open stack is enabled and capable of the support at the same time >> Internet of things just before you head off where do you want to, you're still relatively fresh right to that space, where do you think you want it to go with Intel? Like what's your vision or what are your thoughts about the kinds of areas that you'd like to explore here over the next 18-24 months? 
>> I think we have, first thing is an incredibly exciting market some of the examples we just spoke about, the possibilities that they open up for our customers but also for our partners to really evoke new forms of business, new revenues, new capabilities as a result of bringing the marriage of cloud technology together with the economics of you know volume technology consumption and deployment and all of those assets across into a new set of applications that IOT opens up I see tremendous opportunity to make that marriage happen but also because I've spent so much time on the infrastructure side and very much with com service providers you know I can feel the pent up desire to find ways to deploy new types of manage services and new monetization models if they can get inside the data how we do optimal deployment of networks manage infrastructure on behalf of end customers and all that becomes possible if we bring the application and the IOT closer to the infrastructure so a lot of my focus will really be on bridging across those different worlds ensuring that work with you know partners like Red Hat continue to be the developed very successfully and we open up new opportunities for each other >> Sure. An exciting time, there's no doubt about that. You're at this great convergence right? You're at the fun and games part of this with devices and that exponential growth John thanks for thanks for the time. >> Sure, thank you. >> Glad to have you here on theCUBE once again John Healy joining us from Intel back with more live from Boston you're watching theCUBE. (upbeat music)
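As a footnote to the DL Boost point above, here is a hedged sketch of how one might check whether the VNNI instructions behind Intel Deep Learning Boost are exposed on a given host. It assumes a Linux system with the usual /proc/cpuinfo interface; the frameworks a distribution like RHEL 8 ships would normally detect this on their own.

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Collect the CPU feature flags the kernel reports (Linux only)."""
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

flags = cpu_flags()
# avx512_vnni is the flag Linux reports for the Vector Neural Network Instructions
# behind Intel Deep Learning Boost; avx512f/avx512bw indicate base AVX-512 support.
for feature in ("avx512f", "avx512bw", "avx512_vnni"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```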
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lenovo | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
John Healy | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
John Walls | PERSON | 0.99+ |
RHEL 8 | TITLE | 0.99+ |
John | PERSON | 0.99+ |
RHEL | TITLE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
sixth time | QUANTITY | 0.99+ |
two years ago | DATE | 0.99+ |
early April | DATE | 0.99+ |
today | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
Red Hat Summit 2019 | EVENT | 0.98+ |
US | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
nearly 15 years | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
each | QUANTITY | 0.96+ |
Red Hat Summit | EVENT | 0.96+ |
one example | QUANTITY | 0.95+ |
Rev | TITLE | 0.94+ |
both environments | QUANTITY | 0.91+ |
first thing | QUANTITY | 0.88+ |
Amber Alert | TITLE | 0.87+ |
Red | ORGANIZATION | 0.87+ |
one market | QUANTITY | 0.85+ |
last 10 years | DATE | 0.81+ |
The Cube | TITLE | 0.8+ |
Red Hat | TITLE | 0.78+ |
one demographic | QUANTITY | 0.78+ |
Internet of Things | ORGANIZATION | 0.77+ |
billions of connected endpoints | QUANTITY | 0.77+ |
Aviv | ORGANIZATION | 0.76+ |
single | QUANTITY | 0.73+ |
two dynamics | QUANTITY | 0.7+ |
two things | QUANTITY | 0.68+ |
theCUBE | TITLE | 0.68+ |
Decades | QUANTITY | 0.63+ |
5G | QUANTITY | 0.61+ |
next 18-24 months | DATE | 0.61+ |
Platform Management | ORGANIZATION | 0.6+ |
Hat | TITLE | 0.49+ |
Boost | OTHER | 0.47+ |
Red Hat | LOCATION | 0.46+ |
Summit | ORGANIZATION | 0.29+ |
Alex Almeida, Dell EMC and Bob Bender, Founders Federal Credit Union | Dell Technologies World 2018
>> Announcer: Live from Las Vegas, it's the Cube, covering Dell Technologies World, 2018, brought to you by Dell EMC and it's ecosystem partners. >> Well welcome back to Las Vegas, the Cube, continuing our coverage here of Dell Technologies World 2018, with some 14 thousand strong in attendance. This is day two by the way, of three days of coverage that you'll be seeing here live on the Cube. Along with Keith Townsend, I'm John Walls and we're now joined by Alex Almeida, who is the consultant of product marketing at Dell EMC, and Bob Bender who is the CTO of Founders Federal Credit Union, Bob, good to see you as well, sir. >> Thank you, thank you for having me. >> You bet, thanks for being here to both of you. First off, let's just set the table for what you do at Founders and what Founders is all about and then why Dell, and how Dell figures into your picture. >> Sure, so Founders Federal Credit Union established in 1950 we're a regional financial institution providing basic services for that area in South and North Carolina. We now service over 32 areas and we have about 210 thousand plus members. So I'm Chief Technology Officer and we're looking to Dell EMC to really give us a lift in the cyber resilience of our data, what we're trying to protect today. >> Keith and I were talking too, and said we always like hearing on the customer side of this, especially on the financial side, right? Because your concerns are grave concerns, right? We all care about our money, right? And obviously that's first and foremost for you, having trust, credibility, liability. So tell us a little bit about that thought process in general, what drives your business and how that then transfers over to DIT. >> Sure, and as a member, you look at us, big or small, you expect the same cyber resilience, protection for your personal information, you don't think there's going to be a difference there. So if you look at the Carolina's, you're going to see a significant, or the southeast, we've been picked on with malware, with that data extortion of what the name, ransomware, so we had to find a solution quickly and we looked at Dell EMC for data protection and cyber recovery to really help us in that area and really protect our data. >> So let's talk about some of the threats faced. Outside of malware, typically the line of thought is, you know what, don't assume that you can prevent getting hacked, assume that you are hacked, what personas do you guys wear as a bank, or as a credit union? >> Well, we looked at that and what we did is we get really involved and we go out and we see that event, the breach, the malware, the ransomware, and so we really thought, we lack the ability of bringing assets under governance, so how do we really roll that up so that everybody knows at any point in time, we can recover, that we have kind of a isolated recovery, an air gap, or a data bunker, and then a clean room to bring that up, a Sandbox. And we really saw that our tape media backup recovery was not going to recover for the events that were happening, the old days, you're looking at one or two critical systems that are being recovered. Today, they're locking 500, 1500 servers in a matter of minutes. So, when you rehydrate that data, you know, the deduplication, we're seeing 72 to one and that's done very fast, through the product lines of Dell EMC, significant, but when you want to rehydrate that, the data's gone, it's just not there. Well, if you take away that air gap situation, what're you left with? 
And if they're smart enough to figure out where your backups are, you're left with no protection, so we really needed to isolate and put off network all that critical data. And because of that 72 to one dedupe rate, and I realize we may be unique, there's others that may have to choose what those critical systems are, we're not going to have to, we're going to protect everything, every day, and so that we have a recovery point that we can point to and show management and our board and our members, such as you guys, that we can recover, that you're going to have trust in us handling your financial responsibilities. >> So what specific technologies are you guys using from Dell to create this environment in which you can recover within these isolated bubbles? >> You know, I'll let Alex talk more specific, but we really looked at the data protection solution, and a cyber solution, we said phase one, we want to stand this up very quickly because it's any minute this could happen to us. It's happening to very smart establishments. We really picked what was going to optimize our first iteration of this, and we did it quickly, so we're talking a roll out in 45 days. We used Data Domain, Avamar, DD Boost, we've got Data Protection Advisor, which gives me, whether I'm here or I'm off at another conference, or I'm showing up at the office, I get instant results of what we did the day before for that recovery. I know that we're in the petabyte storage business, I don't know when we crossed that line, but now we store you know, a huge amount of data very quickly. I mean, we took their product line and went from hours down to seconds and I can move that window any which way I want, and so it's just empowering to be able to use that product line to protect our data the way we are today. >> Yeah, I think the Dell EMC cyber recovery solution really is kind of looking at solving the problem, most people look at it from solving it as a preventative thing, how do I prevent malware from happening, how do I stop ransomware from attacking me? The thing is is that it's all about really, how are you going to recover from that? And having plan to be able to recover. And with the way we approached it, we started talking to customers like Bob, and they were really coming to us and saying, you know, this is increasing, this is an increasing problem that we're seeing and it's inevitable, we feel we're going to be attacked at some point. And you see on the news today, you know, we're only a little bit through the year and there's been a lot of news on cyber attacks and things like that. The key thing is how do you recover? So we took at that in conversations with our customers and went specifically back and designed a solution that leverages the best in industry technology that we have with our data protection portfolio. So when you look at data deduplication, you look at Data Domain, that technology in the industry provides the fastest recovery possible. And from there, that makes it realistic for companies to really say, yeah, I can recover from a ransomware attack. And the more important thing is, we look at this as the isolation piece of the solution is really where the value comes in. Not only is it to get a clean copy of the data, but you can use that for analysis of that data in that clean room to be able to detect early on problems that may be happening in your production environment. And it's really important that that recovery aspect be stressed and really the Data Domain solution is kind of the enabler there. 
>> It's still a really tough spot to be in, right? Because on one hand you're protecting, you're trying to prevent, so you're building the fortress as best you can, and at the same time, you're developing a recovery solution so that if there is a violation, an intrusion, you're going to be okay, but the fact is the data's gone, you know, it went out the door, and so I'm just curious psychologically, you know, how do you deal with that, with your board, with your ownership, with your customers? How do you deal with it, Alex, to your customer, just saying we're going to do all we can to keep this safe, >> Absolutely. >> But so that but is a big caviada, right? How do both of you deal with that? >> Yeah. >> First off... >> I'll say this, working with the Dell EMC engineers and their business partners, I'm sleeping better at night, and I'm not just saying that being here, what I mean is that they've shrunk my backup window, they've guaranteed me reporting and a infrastructure IQ of that environment that I have more insight, integrated, so across, holistically, my enterprise. So no longer am I adding on different components to complete backups, this backup, this company, this... I never get that insight, and I never really have the evidence that we're restoring, I can do the store and the restore at the same time and see that next day in reporting, that we're achieving that. I hear that but, but that but is a little quieter because you know, it's just a little less impactful because I'm confident now that I've got a very efficient window. I'm not effecting again, with those add on, ad hoc products, not condemning 'em, but, they're impactful to critical applications, I can see response time during peak times, the product doesn't have that effect. And it's really exciting because now I can, you know, I've got to rip and replace, I got to lift and shift, you decide what the acronyms you want to add to it, but we... The big thing I want to add, and sorry to ramble here a little, >> You're fine. >> Yep, yep. Our run books are becoming smaller. And this is, the less complex, now we're taking keep the lights on people that are very frustrated with our acronyms and our terminology and the way we're going and I'm starting to bring them into the cyber resilience, cyber security environment and they're feeling empowered and I'm getting more creative ideas and that means, more creative ideas means we're back as a business solving problems, not worrying if our backups are done at two in the morning. >> And from a Dell EMC perspective, I think we're really uniquely positioned in the industry, in that, not just from Dell EMC, but we look at all of Dell technologies, right? When we incorporate the fact that we have best in class data protection solutions to do operational recovery, disaster recovery, the next logical step is to really augment that and really start looking at cyber recovery, right? And then when you look at that and you look at the power of Dell technologies, it's really a layered approach, how do I layer my data protection solutions to do operational recovery, to do disaster recovery? And then at the same time, throw in a little RSA and SecureWorks in there into the picture and we're really uniquely positioned as a vendor in the industry, no other vendor can really handle that breadth in the industry from a cyber recovery standpoint when you throw in the likes of RSA and SecureWorks. >> So, Alex, let's drill down in the overall capability versus the rest of the industry. 
There's been a ton of investment in data protection, 90 million, 100 million, we're seeing unicorns pop up over just this use case of data protection. And they're making no qualms at it, they're going right at the Data Domain business. What is the message that you're going out and telling any users like Bob, that, you know what, stay the course, Data Domain, the portfolio of data protection at Dell is the best way to recover your environment in case of a breach. >> Yeah, absolutely. So in terms of that, what I say to customers I talk to every day around this, that are maybe doubting you know, going forward and what they're going to do, is that we are continuing to innovate, that Data Domain platform continues to innovate, you see that in our cloud scenarios, in the cloud, you know, use cases that we're talking about, and really kind of working together with our customers as a partner on how we apply things like cyber recovery for their workloads that go into the cloud, right? And that's really through that working relationship with customers and that very strong investment that we're making on the engineering side with our roadmaps is really what customers, at the end of the day become convinced that Data Domain is here to stay. >> So, Bob I'd love to follow up on-- >> Bob: Can I add on to that? >> Please. >> You know, I think the couple things you pointed on that I probably missed, is one, you've given me options, I can be on pram or off pram or back to on pram, and that is with the product line. And again, that integration across that, I have to have that insight, but at the end of the day, Dell EMC's product line delivers and that's what we experienced in our relationship. We're not talking about... 72 to one dedupe rate, I know that's, I triple checked the facts, it's like really, we're achieving that? That's impactful to my project lines, right? I'm no longer a bottle neck because I'm back at the projects and we're getting stuff moving and we're just not confused by the technology or the way we have to, you know, kind of bandaid them together, it's just one place to go and it delivers. And we see that delivery, especially with the growth of the Data Domain and the addition of the Sandbox, it's very exciting, we're seeing some great performance on our new systems. >> Yeah, and we hear that a lot about the flexibility of the portfolio and the data protection, the fact that, Bob mentioned it many times, making the backup window disappear is really where the heart of it is. And now Bob's team an all the customers that I've talked to and their teams can go off and actually move the business forward with more innovation and bringing more value back to the business. >> Part of security is disaster recovery. Do you guys integrate your disaster recovery practice as part of your Data Domain implementation? >> I think that's a great question. We've challenged our DR group, external also, we saw incident response component, just a big empty hole, it's missing. And I think that's a change in mindset people have to implement, as you pointed out, incident response is going to be before the disaster. And if you don't stand up, you're, look our data's gone mobile, that means it's everywhere, and we have to follow it everywhere with the same protection in the end of the day, no matter where we sit, we own it, we're responsible for it, so we have to go after it in the same protection. 
So I think it is part of that, we're integrating it, I think we confused a couple companies with that, but you got to stand up those foundation services, the cyber security, the data life cycle has made the cyber security become much more complex. And the use, the business use of that data is becoming more demanding, so we had to make it available, so we had to be transparent with these products and Kudos to Dell EMC and all the engineers making this happen. I don't know what I would be doing if it wasn't there for me. >> Keith: Well thank you, Bob. >> You know, and I'll tell you what strikes me a little bit about this, as we have just a final moment here, is that we think about cyber invasions and violations, what have you, we think about it on a global or a national scale. I mean, you are a very successful regional business, right? And you are just as prime of a target for malfeasance as any and you need to take these prophylactic measures just as aggressively as any enterprise. >> Right, right. If you look at the names, I mean, you just go down the list, Boeing, Mecklenburg County, City of Atlanta, you know, not to name 'em and pick on 'em but they're still recovering. And our business resilience, our reputation is all we have, we're there, you know, our critical asset is your data, that is what we say, you know, the story we tell is how we protect that and that's our services and if at the end of the day you don't trust our services, what are we? >> Alex: That's right. >> Not enough just to protect and prevent, you have to be able to recover. >> So to have a business partner that really understands, and I know I'm a little, maybe a little smaller than some of your others, but you still treat me like I'm... And you still listen to me, I bring you ideas, you say this fits, let's see what we can do. Your engineers go back and they say, you know, we can't say yes, but we can say we're going to take a different approach and come back with a solution. So it's very, very exciting to have a partner that does that with you. >> No, it's a great lesson, it is, it's great. Although, as I say goodbye here, I am a little disappointed when I heard you're from South Carolina I was expecting this wonderful southern accent to come out. (laughing) it just, Bob, what happened? >> You know, I'm an Iowa boy. >> John: You got a little yankee in ya'. >> There you go. Maybe they'll say a little more than a little. >> Alright, gentlemen, thanks for being with us. >> Thank you very much for having us. >> Thanks for sharing the Founders Federal story. Back with more from Las Vegas, you're watching the Cube, we're in Dell Technologies World 2018.
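For a rough sense of what the 72 to one reduction Bob quotes means in practice, a quick back-of-the-envelope calculation helps; the one-petabyte figure below is purely illustrative, not Founders' actual number.

```python
# Illustrative only: assume 1 PB of logical backup data and the 72:1
# data reduction ratio mentioned in the conversation.
logical_tb = 1024            # 1 PB expressed in TB
reduction_ratio = 72
physical_tb = logical_tb / reduction_ratio
print(f"~{physical_tb:.1f} TB of physical capacity")  # roughly 14.2 TB

# Daily protection of everything becomes practical when each additional
# full copy only stores the unique (non-duplicate) blocks.
```

That arithmetic is why protecting everything, every day becomes realistic instead of having to pick a handful of critical systems.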
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Alex Almeida | PERSON | 0.99+ |
Bob Bender | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Alex | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
South Carolina | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Today | DATE | 0.99+ |
72 | QUANTITY | 0.99+ |
1950 | DATE | 0.99+ |
90 million | QUANTITY | 0.99+ |
100 million | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
Founders Federal Credit Union | ORGANIZATION | 0.99+ |
Avamar | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
500 | QUANTITY | 0.98+ |
14 thousand | QUANTITY | 0.98+ |
two critical systems | QUANTITY | 0.98+ |
45 days | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Mecklenburg County | LOCATION | 0.98+ |
three days | QUANTITY | 0.98+ |
about 210 thousand plus members | QUANTITY | 0.98+ |
Iowa | LOCATION | 0.98+ |
Dell Technologies World 2018 | EVENT | 0.97+ |
DD Boost | ORGANIZATION | 0.97+ |
North Carolina | LOCATION | 0.97+ |
Boeing | ORGANIZATION | 0.97+ |
Data Domain | ORGANIZATION | 0.96+ |
Carolina | LOCATION | 0.94+ |
South | LOCATION | 0.94+ |
Dell Technologies World 2018 | EVENT | 0.93+ |
SecureWorks | ORGANIZATION | 0.92+ |
over 32 areas | QUANTITY | 0.91+ |
first iteration | QUANTITY | 0.85+ |
1500 servers | QUANTITY | 0.81+ |
day two | QUANTITY | 0.78+ |
Dell Technologies World, 2018 | EVENT | 0.77+ |
two in | DATE | 0.77+ |
next day | DATE | 0.73+ |
couple | QUANTITY | 0.71+ |
RSA | ORGANIZATION | 0.69+ |
couple things | QUANTITY | 0.69+ |
Itzik Reich, Dell EMC XtremIO - Dell EMC World 2017
>> Announcer: Live from Las Vegas, it's theCUBE. Covering Dell EMC World 2017. Brought to you by Dell EMC. >> Welcome back to Dell EMC World 2017. We're live here in the Venetian in Las Vegas. Day one of the three day show. Had Michael Dell out on the keynote stage earlier today. Also had David Blaine, world famous magician. Pretty interesting performance to say the least. >> Yeah I went down to get an ice pick. (man laughing) During our break. >> We'll get into that later, but it was interesting. Keith Townsend, John Walls, also joined by Itzik Reich, who is the CTO of XtremIO at Dell EMC. Itzik, thanks for being with us. It's good to see you sir. >> Thank you very much. >> All the way from Tel Aviv, and great to have you. Alright, so your sweet spot of the company is giving birth to a new baby today. >> There you go. >> XtremIO X2, tell us about that. What spawned that, and what has the response been to what you developed? >> Right, I think in order to understand X2, you need to start with the beginning, the X1. So, November 2015, I was having my class reunion, meeting my ex-girlfriend, and we launched X1. And X1 became, within two quarters, the largest-selling all-flash array in the world. From nowhere to the largest-selling all-flash array, at least in terms of units sold to the market, right, per both Gartner and IDC. And it was huge. A huge achievement and a success for us. A success because nobody believed we would become the number one leader. And we did it even though we didn't have the normal life cycle to mature a product. Right, so you mentioned being a father. I'm a father to two daughters, lovely daughters. One of them is six years old, one of them is five. And the young one is starting to show some signs of being a really clever person. And I'm afraid that somebody will tell me, oh, she can skip the first grade. Because skipping a grade has some issues associated with it, social aspects of it. So we've been really busy maturing XtremIO X1, making it super stable. Today we're already at about five nines in the market. But it was also time to refresh the product and come out with something new. So our life cycle wasn't a traditional year or year and a half of refreshing the product. It took us longer to get to X2, and this is what we announced today. So what's new with X2? The first thing is the ability to come with a much denser drive count and a denser configuration. In X1, you could put up to 25 drives inside each DAE. In X2 you can put up to 72 drives per DAE, right. And you can scale just like before, up to eight X-Bricks. It's a huge capacity, which you need for the vast majority of the use cases out there that aren't, you know, just VDI or just a single database, right. Today XtremIO can fit pretty much every transactional workload, including virtualization workloads. You just need a lot of capacity for thousands of VMs. So that's one of the things. The other thing is we improved the performance of the X2 array. And the magic story there is that, because of the thousands and thousands of customers that we're involved with, we really got good insight into the workloads that they are running. And what we found out is something very interesting. The majority of those customers are running workloads with very small block sizes. We store every item that arrives in the system according to its block characteristics, and we found that the majority of them are using very, very small block sizes. And we wanted to improve the performance of those block sizes, the IOPS and the latency.
And we also wanted to make sure that it's actually more economical, cheaper than the very expensive new NVMe drives that are out there. So different design goals: making it faster and also making it cheaper in different dimensions. So we came with a new feature called Write Boost. In a nutshell, Write Boost will give you 80% better latency for pretty much every workload that is out there. >> So... with small block sizes versus big block sizes, why is that important? We're at a conference and we're talking a lot about digital transformation. See, we teased John earlier. You know he's a sports guy, he doesn't do LAG goals. >> (laughing) Sorry. >> That's alright. >> Help us understand the value of that data type. >> Sure, so you know, we like to talk about digital transformation, but at the end of the day, you're the customer, you have a database. You'll run a query or queries against the database. If it's a very large database, there are thousands, maybe even millions of queries every day. Those queries take time for the end user to get a response. So let's assume that you want a monthly report, and this report normally takes nine hours to generate. If I can shrink the report crunching time to two hours instead of nine, that means that I have provided better value for the business, right. One of the stories is that we have a financial customer in the Middle East. They need to generate the report every month between midnight, which is when they kick off their reports, and eight o'clock in the morning. Why eight o'clock? Because this is when the employees start to come to work. And for every hour that they exceed past eight o'clock, they get fined by the government. So if I'm saving this customer four hours, then they are not getting fined by the government for generating the report. That's a true value return for the customer. 'Cause those things are important. People tend to think about just performance numbers in terms of IOPS, but the real magic number is latency. How quick can you make the query? Whether it's a database application or a VDI VM or just a generic web server running on a virtual machine. Those are the important things today. >> So transactional apps. Big deal. Are these transactional apps, we learned a lot about virtualization and cloud computing to date. Are these transactional apps running in a virtualized environment, or are we still relying on big, heavy bare-metal workloads going to XtremIO X2? >> Yeah, it's a good question. At least from my experience, some would argue that anywhere between 70 to 80% of the customers went fully virtualized. So they're running their entire application either under vSphere or a Microsoft hypervisor. So they are fully virtualized. Some of the customers are still running their workloads on traditional physical servers, right. Even with ESX, at the end of the day it runs on a physical server underneath it all. But yeah, the majority of them are already there in terms of virtualization. >> So what are customers really excited about when it comes to feature sets for XtremIO X2 versus XtremIO, version-wise? >> Right, amazing question. So performance, we've already discussed performance. 80% better latency, that's not something that you get just because of the usage of better CPUs. Intel moves slowly, it's basically dead, right. They don't give you 200% performance between generations, so we wanted to do something else to solve the same problem.
The other thing is quality of service. We are not shipping it at GA yet, but it's coming soon. The ability to give a specific VM a specific IOPS cap and a latency cap. And it also gives you the ability to burst to more IOPS if it's needed for a couple of minutes. So quality of service addresses the noisy neighbor, right. Somebody generates too much noise, you want him to be quiet. That's what quality of service is. The other thing that we've announced is native replication. We finally came out with our own replication that can replicate between one XtremIO X2 and another. But it's not a traditional replication. The unique thing about XtremIO was always the CAS, the content-addressable storage architecture. People typically think about it as a deduplication feature, but in fact we don't have a feature called deduplication. We analyze the data as it goes through the system itself, and we give a unique hash signature to each one of those blocks. And if the hash signature already exists in the system, we dedupe the block. But it's not a feature per se. That's why the deduplication is so fast on XtremIO. So up until now that architecture was only applicable to writing the data into the array itself. Now it's also applicable to replicating the data. So for example, if you have a data reduction of five to one, which is very common in virtualized use cases, many VMs from the same template and so on, you need to replicate four times less data from the source to the destination target, right. So that's a very, very big thing, because you need to replicate more and more data, but the 24-hour window hasn't changed. God didn't upgrade the day to give us more time, right. >> (laughing) Right. >> It's still 24 hours per day. So this is super important for us and we're very excited about it. And the other thing is, again, a larger, denser configuration of the array itself, so the cost per drive of XtremIO can be up to two-thirds cheaper, so it's cheaper for them to put their workloads on XtremIO, rather than picking just the database that needs all the performance in the world. So we can really become a true enterprise array with those features. >> It seems like it's got to be for you a constant chase though, right. You're looking for higher performance, you're looking for lower costs. You said you just gained an 80% increase in your performance capabilities. >> Yup. >> And now people are going to be looking at you for the next XtremIO, and so what's next? You know, where are the gains to be had in the next generation of technology, and just in terms of philosophically approaching that, what do you do? >> Yeah, yeah, again, another good question. I actually gave a briefing about it just earlier. So, the first thing we need to do, as an industry, not just Dell EMC, is to lower the cost of the drive itself to be even cheaper than the economical drive, the hybrid mechanical drive, and that's not the case today, right. You can get a more economical drive if you apply data reduction on it, right. So if you're five times cheaper because of the data reduction that gets integrated into the array, the dedupe, the compression, the thin provisioning, then you can be on par with the mechanical drives. So first we want to be on par, if not cheaper. We want everybody to move to SSDs. And we were the first all-flash array in the portfolio of Dell EMC. That's the first thing. The second thing is to really get better insight into your whole application, your workloads.
Today people analyze things like IOPS and latency, but what does your application really think? Where are the queues in the application stack itself, right? How can you find them from the storage subsystem itself, right? So we are on a journey to get there with our reporting mechanisms. So a year and a half ago, we started a new project to completely change the reporting mechanism of the WebUI, the interface of XtremIO, right. And today you can really drill down into pretty much every aspect. Up until now you had to purchase third-party software that would analyze your workload for you. So things like histograms, IOPS, block size, read and write latency per block size. So you can really understand your workload. We also show you abnormalities. We can tell you, every week this application has been fine, but on that Friday, for some reason, the response time wasn't that good. You should go in and check it out. Maybe there is a bottleneck in the application. Maybe it was a bottleneck in the storage layer. So you can actually find it out. But I would argue that the long-term goal, and that's a vision, right, I'm not announcing anything yet, is really the ability to marry or combine the software-defined world, the hyper-converged mechanism, with the traditional arrays, right. Although SSDs are not that traditional. Maybe you can have a denser configuration with a very small DAE, but the performance aspect of it will not be driven from the DAE where it actually stores the data, but from virtual machines that you can spin up and down in a cloud-like fashion. That will bring you all the performance that you need. That, to me, is the holy grail, really merging between the worlds. 'Cause there isn't one perfect answer, right. The software-defined guys will tell you everything should go to software-defined storage. We will tell you everything should go to flash arrays. But really the truth is, like always, right in between. And this is really one of the directions that we are approaching. >> I tell you what, for now I want you to enjoy X2. How about that? >> That sounds good. >> It's a good day for you. And don't let that five-year-old skip either. I think that's a good idea too. >> Very good. Very good. Thank you very much. >> And so thanks for joining us. >> Thank you. Thanks. >> Back with more here on theCUBE. We're live in Las Vegas at Dell EMC World 2017. (exciting techno music)
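To make the content-addressable idea Reich describes above concrete, here is a toy sketch in Python. It is purely illustrative and is not XtremIO's implementation; the class, method names, and block contents are invented for the example. The point is that when every block is keyed by a hash of its own contents, deduplication is a side effect of the write path, and replication only has to ship blocks the target has never seen.

```python
import hashlib

class ContentAddressableStore:
    """Toy content-addressable block store (illustrative only)."""

    def __init__(self):
        self.blocks = {}   # content hash -> block bytes
        self.refs = {}     # content hash -> logical reference count

    def write(self, data: bytes) -> str:
        # The key is derived from the content itself, so two identical
        # blocks always map to the same key and are only stored once.
        key = hashlib.sha256(data).hexdigest()
        if key in self.blocks:
            self.refs[key] += 1        # "dedupe" is just a reference-count bump
        else:
            self.blocks[key] = data
            self.refs[key] = 1
        return key

    def replicate_to(self, target, keys):
        # Only ship blocks whose hashes the target has never seen.
        for key in keys:
            if key not in target.blocks:
                target.write(self.blocks[key])

source = ContentAddressableStore()
k1 = source.write(b"8K block cloned from a VM template")
k2 = source.write(b"8K block cloned from a VM template")   # duplicate write
assert k1 == k2 and len(source.blocks) == 1                 # stored once

dr_site = ContentAddressableStore()
source.replicate_to(dr_site, [k1, k2])                      # only one block crosses
assert len(dr_site.blocks) == 1
```

That same property is what carries the data-reduction ratio over to replication: a block whose hash already exists at the target never crosses the wire.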
SUMMARY :
Brought to you by Dell EMC. We're live here in the Venetian in Las Vegas. Yeah I went down to get an ice pick. It's good to see you sir. All the way from Tel Aviv and great to have you. what you developed. And the magic story around there was that Why is that important? (laughing) You're the customer, you have a database. So transactional apps. Some of the customers are still running So what are customers really excited about at the source to the destination target. Right. And the other thing is that, again It seems like it's got to be for you And now people are going to be looking at you of the drive itself to be even cheaper I tell you what, for now I want you to enjoy X2 for now. And don't let that five year old skip either. Thank you very much. Thank you. Back with more here on theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Blaine | PERSON | 0.99+ |
Itzik Reich | PERSON | 0.99+ |
John | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
two hours | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
200% | QUANTITY | 0.99+ |
Itzik | PERSON | 0.99+ |
November 2015 | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
nine hours | QUANTITY | 0.99+ |
John Walls | PERSON | 0.99+ |
millions | QUANTITY | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
24 hour | QUANTITY | 0.99+ |
four hours | QUANTITY | 0.99+ |
eight hour | QUANTITY | 0.99+ |
nine | QUANTITY | 0.99+ |
eight o'clock | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Middle East | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
five year | QUANTITY | 0.99+ |
three day | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
five times | QUANTITY | 0.99+ |
two daughters | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
first class | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
a year and a half ago | DATE | 0.98+ |
today | DATE | 0.98+ |
70 | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
first thing | QUANTITY | 0.98+ |
XtremeO2 | TITLE | 0.96+ |
Venetian | LOCATION | 0.96+ |
two quarters | QUANTITY | 0.95+ |
XtremeIO | TITLE | 0.95+ |
Friday | DATE | 0.95+ |
Voltron | ORGANIZATION | 0.95+ |
four times | QUANTITY | 0.94+ |
X1 | COMMERCIAL_ITEM | 0.94+ |
Day one | QUANTITY | 0.93+ |
XO Drive | COMMERCIAL_ITEM | 0.93+ |
24 hours per day | QUANTITY | 0.93+ |
XtremeIO | ORGANIZATION | 0.92+ |
S6 | COMMERCIAL_ITEM | 0.92+ |
Intel | ORGANIZATION | 0.91+ |
ORGANIZATION | 0.91+ | |
each one | QUANTITY | 0.91+ |
thousands of VM's | QUANTITY | 0.89+ |
each DAE | QUANTITY | 0.88+ |
two-thirds | QUANTITY | 0.88+ |
single database | QUANTITY | 0.87+ |
Dell EMC World 2017 | EVENT | 0.87+ |
ExtemeIO | TITLE | 0.87+ |
X2 | COMMERCIAL_ITEM | 0.86+ |
up | QUANTITY | 0.86+ |
25 drives | QUANTITY | 0.85+ |
six years old | QUANTITY | 0.84+ |
up to 72 drives | QUANTITY | 0.84+ |
EMC World 2017 | EVENT | 0.83+ |
A.J. Wineski, Shazam ITS, Inc. & Matt Waxman, Dell EMC Data Protection - Dell World 2017
>> Voiceover: Live from Las Vegas, it's theCUBE. Covering Dell EMC World 2017. Brought to you by Dell EMC. >> Okay, welcome back, everyone. We are live here in Las Vegas for Dell EMC World 2017. theCUBE's 8th year of coverage of what was once EMC World, now it's Dell EMC World. The first official show of the combination of the two companies. I'm John Furrier with SiliconANGLE. My cohost this week for three days of wall-to-wall coverage, Paul Gillin. And our next guest is Max, Matt Waxman, Vice President of Product Management, Dell EMC Data Protection, and A.J. Wineski, who's the UNIX and Microsoft Technologies Manager at Shazam ITS. Welcome to theCUBE, good to see you guys. >> Thanks for having us. >> Thank you. >> So data protection on stage, it's hot. I mean, it is the hottest category, both on the startup side but also customers, as they go to the cloud, are rethinking the four-wall strategy of data management, data protection. Why is, is it the cloud? What's the, why is it so hot? >> Yeah, I think you nailed it. It is very hot. It's, backup is not boring. I think customers like A.J. are talking about simplifying, automating, getting to the cloud, and so we oughtta modernize data protection. Our announcements this week were all about how we're doing that. We had a great announcement around a new appliance that's a turnkey solution, out of the box, time to value less than three hours. That's the agility that customers are really looking for. And of course our cloud data protection's evolved a lot. Great new use cases, disaster recovery now for the cloud, great use case. >> Matt, A.J., I want to get your thoughts in a second, but Matt, first talk about the dynamics that the customers are facing right now, because really there's two worlds that exist now, pure cloud native, born in the cloud. Completely different paradigm for backup and recovery, data protection, all on this scheme that has to be architected. And then companies that are moving quickly, that had a Data Domain, had pre-existing apps that have been doing great, but now have to be architected for that cloud, hybrid cloud. Those are the two hot areas. Can you just break that down real quick? >> Yeah, yeah, you know, I think you have a good framework there. Right, there are customers who will go through a re-platforming, and think about how they can move their application and its existing ecosystem into the cloud. That's where we've seen a lot of traction. We would call that "lift and shift." You know, move the application as is. And then this cloud-native space is really different. It's developer-centric. It's thinking about "How do you cater to the application developer who wants to build off of a modern toolset?" And there it's all about microservices, it's API-driven. You know, it's a-- - [John] Programmable infrastructure. >> Absolutely. >> John: Programmable backup. >> Exactly, right? That's what makes a text-- >> Alright, A.J., the proof is in the pudding, when you sit there and you look at that scenario, programmable, being agile, automation all coming down the pike, what's it look like for you? >> Well, for us, prior to having ECS, the Elastic Cloud Storage suite, we were running everything to backup tape. And we were having to do two sets of tapes. It was taking us two weeks sometimes to do our tape retentions. We had set retention policies at 11 years across the board because our past backup software didn't allow us to set retention periods very well.
Once we got to the Elastic Cloud Storage suite, it was a couple clicks, you set retention periods, and it takes care of itself. It automatically replicates to our DR site and we don't have to worry about it. It's done. I used to have three and a half FTEs who took care of backup suites all the time. I'm down to a half-guy now. So I gained back-- >> So you re-deployed those resources on other things. >> On other projects; what I hired them for in the beginning, and now since that's happened, I'm able to use a lot more of those resources for the projects we should be using them for. We don't have to worry about backups like we used to. I don't have to worry at night, "Did it back up? Did it not? Did my essential databases get backed up to tape?" I don't have to worry about that anymore, it's done automatically. >> What was that transition like for you? Going from tape to cloud? >> Painful. It was because we were having to move everything that was on tape onto ECS. It takes a while to redo that. Finally we decided at one point that after this period, no longer are we going to be writing to tape, we're going to write everything to ECS. It just became too painful. So once that transition was done, once we made the decision that we were no longer going to tape, it was easy. >> How about the cost? I mean, you now have an operational cost instead of a capital cost in your backup equipment. Over the long term, is this a better, a lower cost picture for you? >> Oh, much better. We're saving $350,000 a year just in backups. And over the five-year TCO of that product, it's $2.7 million that we are saving over five years for that product alone. We're a small non-profit organization, so we can then, in turn, turn around and give our customers some of that money back, because we're not having to charge them so much for some of the backups that we have to do. >> Matt, talk about the dynamic, you mentioned developers. This comes back down to the developer angle because, just a scenario, data is becoming the lifeblood for developers, and making that data available in that kind of infrastructure-as-code way, or data as code, as we say, the DataOps world, if there is one yet. But I'm a developer, okay, I want the data from the application from an hour ago, not two weeks ago, because those backup windows used to be a hindrance to that agility. >> Yeah, yeah. >> How is that progressing, and where is that going in terms of making that totally developer-centric infrastructure? >> Yeah, I mean, I'd answer that on two fronts. I think there's the cloud-native view of that where, you know, what those developers are looking for is inherent protection. They don't want to have to worry about it. Regardless of their app framework, regardless of the size of their app. But at the same time you also have database sizes that are growing so dramatically. I mean, when I was here even two years ago, talking to customers who had databases that were over a hundred terabytes was like 1 out of 10. Now I talk to 6 out of 10 with hundred, two-hundred-terabyte infrastructures. At a certain point you can't back up anymore, and you have to go to the more transformative-- >> And the time alone, the time is killer too. >> Absolutely, absolutely. And so customers are replicating, and how do you put the same sort of controls around replication to get the levels of data protection that you expect? >> Well, we're in a world where people are, customers are collecting everything now, they're saving everything.
And they don't have to save everything necessarily. They don't find out until they start to use it. Is data protection becoming more of a service, a filtering service also, of what data you really need to back up? >> Yeah, I think that gets into the whole notion of data management. And that whole space is, "How can you leverage the information out of the data, as opposed to just managing the infrastructure?" And through automation, we're going to enable our customers to get there. Automate the infrastructure to the point where it's completely turnkey. Set a policy, set an SLA, and go. And at that point, you're managing the metadata. Analytics become really important. We've got a really cool new offering called Enterprise Copy Data Analytics. It's a SaaS-based solution. Literally log on to our website, you enter your serial number, and you're off and running. Analytics, predictive recommendations, based off of machine learning. That, to me, is the transition-- >> Is that managing your copies, you mean? >> That will give you visibility into your copies, that will give you visibility into your protection levels, and it'll actually score you, so you have a very simple way to understand where you're weak and where you're not. >> So this is A.J.'s point about staff efficiency. You have that machine learning, like an automated way, what used to be crawling through log data, looking at stuff, pushing buttons, and provisioning (laughs). I mean, do you see that impact on your end? >> Oh, it's huge on our end. Because in the past, our database administrators would have to write something, and if a developer needed a backup copy of that database, it took potentially days, if not weeks, depending upon the size of that, to get it from tape. Or to go back to the old tape set to do that. Now, with ECS and DD Boost, it's instantaneous. They can restore that instantaneously to where the developers need it. It's a tremendous, tremendous savings for us. >> Some recent research I've seen says that there's still a sizable minority of customers who are concerned about the privacy, security, and integrity of their data in the cloud. Does that, is that an issue for you? >> It is. We're heavily regulated, through different regulations, 'cause we're in the financial services industry, so we have PCI compliance, we have FFIEC compliance, SOC compliance. That's huge. And making sure that that data is protected at all times, is encrypted from end to end, is encrypted in transmission. Those are all things that the Dell EMC suites give us. >> Talk about your data environment, because the data industry's growing, and I remember calling up Dave Vellante years ago in 2010, 2011. The companies that were selling data stuff weren't really data companies, they were selling software. And a lot of the innovation came from what we call "data full" companies. They actually had a ton of data to deal with. They had the data lakes piling up. And they had to figure it out along the way. You guys have a lot of data. >> A.J.: We do. >> Can you give us insight into how big the data coming in is, because Tier 2 data is very valuable. You have data lakes going to be more intelligent, and that becomes another factor in the architectural question. >> Yeah, we, the amount of data we collect is enormous, and we're just starting to get into the analytics of that, and how can we use that data to better serve our customers, and how can we better advertise and pull our customers in to us to provide those services for us.
The data, I mean, we're doing over 90 million transactions a month, that's what's coming through our system. And-- >> John: So you're data full. You're full of data. >> Oh yeah, we're full of data. (laughs) And so there's just a tremendous amount of stuff that comes through us, and that data, used for analytics, is very powerful for us to be able to turn around and provide services to our customers. >> Matt, talk about the dynamic of, as you get into more analytics, this brings up where the data world's going, and this is where kind of the data protection question is. Okay, all this data's coming in, you got some automation in there, you got some wrangling, you got some automation stuff now, analytics surfaces, the citizen analysts now decide to start poking and touching the data. Okay, so now policy's the-- how do you back that up? So you now have multiple touch points on the data. Does that impact the data protection scheme and architecture? >> Yeah, I think it does. You know, fundamentally there's going to be a shift from the traditional backup admin role, not just managing the policy but also managing the data itself, to a role that's more centric around managing the policy and compliance against it. As you go to decentralized environments, and centers of data as opposed to data centers, you need to rethink the whole model and-- >> John: Data center. Data. Center. >> Exactly. >> John: Not server center. >> Right. >> It's the data center. (laughs) >> Paul: As you look-- >> And data's got mass, right, so it doesn't move very easily. >> As you move to a more distributed model in an "Internet of things" type of environment, how will that affect data protection? Do you have to re-architect your service? >> We have been on a journey to transform data protection. Last year we talked about some new offerings in that space with our Copy Data Management and Analytics solution. And that's really oriented towards that decentralized model. It's a different approach. It's not your traditional combine-your-data-path-and-your-control-path, it's truly a decentralized, distributed model. >> Paul and I were talking on the intro today with Peter Burris, our head of research at Wikibon, and we know about the business value of data, and not to spare you the abstract conversation we had, we were talking about how the valuation of companies will be based on the data that they have, and data under management might be a term that we're fleshing out, but the question specifically comes back down to the protection and security of the data. I mean, you look at the market cap of Yahoo after that hack that they had, I think you mentioned the Yahoo hack, it really killed the value of the company. So the data will become instrumental in the valuation, so if that's the case, if you believe that, then you've got to believe that the protection is going to be super important, and that there's going to be real emphasis on governance and management policies and also the value of that data. Do you guys talk about that in your world? Do you guys think about that holistically, and can you share some insight into that conversation? >> Yeah, I mean, I think that comes back to your very first point about "data protection is hot." It's hot because there are a lot more threats out there, and of course there's that blurry line a little bit between security and data protection sometimes, but absolutely, if you look at regulations, if you look at things like GDPR in the EU, this is going to drive an increased focus on data protection. And that's where we're focusing.
- [John] And IoT doesn't make this thing any easier. >> Absolutely not. >> John: (laughs) He shook his head like, "Yeah, I know." ATMs will be devices, wearables will be using analytics to share security data and movement data of people. >> Yeah. And so, us, security is one of the top priorities, it has to be. You look at what's happened with Target and Sony and Yahoo and all the other breaches. That keeps me up at night. And being sure that, >> John: I can imagine. >> being sure that we have a stable backup is integral to our system, especially with some of the recent ransomware threats and things like that. >> Paul: Yeah, going to ask you about that. >> That's scary stuff. And one way to be sure that you are protected from that is being sure that you have, number one, a good security system, but number two, you have a good backup. >> Over half of companies now have been hit by ransomware. Is there a service, a type of service that you have specifically for companies that are worried about that? >> Yeah, we have, I think A.J. said it very well, it's a layered approach. You have to have security, you have to have backups. We have a solution called Isolated Recovery, which is all about helping our customers create a vaulted, air-gap solution as the next level of protection. And some of the largest firms out there are leveraging it today to do exactly that. It's your data. You got to get it off prem, you got to get it into a vaulted area, you got to get it off the network. >> Matt, A.J., thanks so much for sharing the insight on the data protection, great customer reference, great testimonial there in the products. Congratulations. Final question. Your take on the show, it's the first year, big story is Dell EMC World, as a customer are you kind of like, "Mmm, good, it's looking good off the tee, "middle of the fairway, you know?" >> No, I'm impressed. I was really kind of skeptic coming in last year when it was announced and "What is this going to mean?" and things like that, and just seeing this year the integration of all the technologies with Vmware and the Dell desktops, laptops, the server line, the VxRail, VxRack, and all the other suites that EMC Dell products offer, it's refreshing to me as a customer knowing that now I have that one call for just about anything in the IT world. >> As they say in the IT, "one throat to choke, "single pane of glass." We're kind of going back down, congratulations on the solution. >> Matt: Thanks very much. >> Data protection, data center, they call it for a reason, the data center, you got to protect it. It's theCUBE, bringing you all the data here from Dell EMC World 2017, I'm John Furrier with Paul Gillin with SiliconANGLE Media. We'll be right back with more, stay with us. (upbeat tech music)
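A.J.'s "set the retention once and it takes care of itself" workflow is, at its core, a small declarative policy plus an expiry check that runs on a schedule. The sketch below is purely illustrative Python under assumed tiers and retention windows; it is not ECS, Data Domain, or Avamar syntax, and the policy values are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical retention policy -- illustrative only, not product syntax.
# Tiers and windows are invented; the point is declaring them once instead
# of keeping a blanket 11 years of everything on tape.
POLICY = {
    "daily":   timedelta(days=35),
    "monthly": timedelta(days=400),
    "yearly":  timedelta(days=7 * 365),   # e.g. regulatory copies only
}

def expired(copies, now):
    """Return the backup copies whose retention window has lapsed.

    `copies` is a list of (tier, created_at) tuples,
    e.g. ("daily", datetime(2017, 3, 1)).
    """
    return [
        (tier, created_at)
        for tier, created_at in copies
        if now - created_at > POLICY[tier]
    ]

copies = [
    ("daily",   datetime(2017, 1, 1)),
    ("monthly", datetime(2016, 1, 1)),
    ("yearly",  datetime(2011, 1, 1)),
]
for tier, created_at in expired(copies, now=datetime(2017, 5, 8)):
    print("expire {} copy from {:%Y-%m-%d}".format(tier, created_at))
```

Whatever the real product syntax looks like, the shape is the same: the operator declares the windows per tier and the replication target once, and the system, not a person with a tape schedule, decides what to keep, what to expire, and what to copy to the DR site.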
SUMMARY :
Brought to you by Dell EMC. Welcome to theCUBE, good to see you guys. I mean, it is the hottest category, Yeah, I think you nailed it. that the customers are facing right now, and its existing eco system into the cloud. Alright, A.J., the proof is in the pudding, it was couple clicks, you set retention periods, So you re-deployed for the projects we should be using them for. going to tape, it was easy. Over the long-term, is this a better, for some of the backups that we have to do. data is becoming the life-blood for developers, But at the same time you also have And the time alone, to get the levels of data protection that you expect? And they don't have to save everything necessarily. Automate the infrastructure to the point where that will give you visibility into your protection levels, I mean, do you see that impact on your end? and if a developer needed a backup copy of that database, and the integrity of their data in the cloud. And making sure that that data is protected at all times, And a lot of the innovation came from, You have data lakes going to be more intelligent, and pull our customers in to us You're full of data. provide services to our customers. Matt, talk about the dynamic of, and centers of data as opposed to data centers, John: Data center. It's the data center. And data's gone mass, right, We have been on a journey to and not to spare you the abstract conversation we had, this is going to drive an increased focus on data protection. to share security data and movement data of people. and Sony and Yahoo and all the other breaches. is integral to our system, especially with Paul: Yeah, going to ask you is being sure that you have, number one, Is there a service, a type of service that you have You have to have security, you have to have backups. "middle of the fairway, you know?" and the Dell desktops, laptops, the server line, congratulations on the solution. the data center, you got to protect it.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Neil | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jonathan | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Ajay Patel | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
$3 | QUANTITY | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jonathan Ebinger | PERSON | 0.99+ |
Anthony | PERSON | 0.99+ |
Mark Andreesen | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Matthias Becker | PERSON | 0.99+ |
Greg Sands | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jennifer Meyer | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Target | ORGANIZATION | 0.99+ |
Blue Run Ventures | ORGANIZATION | 0.99+ |
Robert | PERSON | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
OVH | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Sony | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Robin | PERSON | 0.99+ |
Red Cross | ORGANIZATION | 0.99+ |
Tom Anderson | PERSON | 0.99+ |
Andy Jazzy | PERSON | 0.99+ |
Korea | LOCATION | 0.99+ |
Howard | PERSON | 0.99+ |
Sharad Singal | PERSON | 0.99+ |
DZNE | ORGANIZATION | 0.99+ |
U.S. | LOCATION | 0.99+ |
five minutes | QUANTITY | 0.99+ |
$2.7 million | QUANTITY | 0.99+ |
Tom | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Matthias | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Jesse | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |